Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/141252
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-20782 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-8ZMTj1Qa60WL/agent.2109
SSH_AGENT_PID=2111
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp@tmp/private_key_6710201067431567863.key (/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp@tmp/private_key_6710201067431567863.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/52/141252/5 # timeout=30
 > git rev-parse 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c^{commit} # timeout=10
JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
Checking out Revision 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c (refs/changes/52/141252/5)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c # timeout=30
Commit message: "Remove VFC from docker compose and helm configurations"
 > git rev-parse FETCH_HEAD^{commit} # timeout=10
 > git rev-list --no-walk 1e361efcd8a4b3caab4f41f34078024e85ac9d73 # timeout=10
provisioning config files...
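The checkout above can be reproduced by hand outside Jenkins; a minimal sketch that follows the same plugin commands (assumes the git:// mirror is readable anonymously, so no Jenkins credentials are needed):

# Fetch patch set 5 of Gerrit change 141252 and check out the same revision
git init policy-docker && cd policy-docker
git fetch git://cloud.onap.org/mirror/policy/docker.git refs/changes/52/141252/5
git checkout -f 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c   # equivalently: git checkout -f FETCH_HEAD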
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins40620125099813531.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-vXym
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-vXym/bin to PATH
Generating Requirements File
Python 3.10.6
pip 25.1.1 from /tmp/venv-vXym/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4 argcomplete==3.6.2 aspy.yaml==1.3.0 attrs==25.3.0 autopage==0.5.2 beautifulsoup4==4.13.4 boto3==1.38.36 botocore==1.38.36 bs4==0.0.2 cachetools==5.5.2 certifi==2025.4.26 cffi==1.17.1 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.4.2 click==8.2.1 cliff==4.10.0 cmd2==2.6.1 cryptography==3.3.2 debtcollector==3.0.0 decorator==5.2.1 defusedxml==0.7.1 Deprecated==1.2.18 distlib==0.3.9 dnspython==2.7.0 docker==7.1.0 dogpile.cache==1.4.0 durationpy==0.10 email_validator==2.2.0 filelock==3.18.0 future==1.0.0 gitdb==4.0.12 GitPython==3.1.44 google-auth==2.40.3 httplib2==0.22.0 identify==2.6.12 idna==3.10 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.6 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==3.0.0 jsonschema==4.24.0 jsonschema-specifications==2025.4.1 keystoneauth1==5.11.1 kubernetes==33.1.0 lftools==0.37.13 lxml==5.4.0 MarkupSafe==3.0.2 msgpack==1.1.1 multi_key_dict==2.0.3 munch==4.0.0 netaddr==1.3.0 niet==1.4.2 nodeenv==1.9.1 oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==4.6.0 os-client-config==2.1.0 os-service-types==1.7.0 osc-lib==4.0.2 oslo.config==9.8.0 oslo.context==6.0.0 oslo.i18n==6.5.1 oslo.log==7.1.0 oslo.serialization==5.7.0 oslo.utils==9.0.0 packaging==25.0 pbr==6.1.1 platformdirs==4.3.8 prettytable==3.16.0 psutil==7.0.0 pyasn1==0.6.1 pyasn1_modules==0.4.2 pycparser==2.22 pygerrit2==2.0.15 PyGithub==2.6.1 PyJWT==2.10.1 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.9.0 pyrsistent==0.20.0 python-cinderclient==9.7.0 python-dateutil==2.9.0.post0 python-heatclient==4.2.0 python-jenkins==1.8.2 python-keystoneclient==5.6.0 python-magnumclient==4.8.1 python-openstackclient==8.1.0 python-swiftclient==4.8.0 PyYAML==6.0.2 referencing==0.36.2 requests==2.32.4 requests-oauthlib==2.0.0 requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.25.1 rsa==4.9.1 ruamel.yaml==0.18.14 ruamel.yaml.clib==0.2.12 s3transfer==0.13.0 simplejson==3.20.1 six==1.17.0 smmap==5.0.2 soupsieve==2.7 stevedore==5.4.1 tabulate==0.9.0 toml==0.10.2 tomlkit==0.13.3 tqdm==4.67.1 typing_extensions==4.14.0 tzdata==2025.2 urllib3==1.26.20 virtualenv==20.31.2 wcwidth==0.2.13 websocket-client==1.8.0 wrapt==1.17.2 xdg==6.0.0 xmltodict==0.14.2 yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
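The venv bootstrap above is done by the LF global-jjb python-tools-install.sh / lf-activate-venv helpers; a rough stand-alone sketch of the same steps (the venv path below is illustrative, the job used /tmp/venv-vXym):

# Create the build venv, install lftools, expose it on PATH, then dump the pins
python3 -m venv /tmp/venv-example
/tmp/venv-example/bin/pip install --upgrade pip lftools
export PATH=/tmp/venv-example/bin:$PATH
pip freeze        # produces the "Generating Requirements File" listing above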
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/sh /tmp/jenkins11212595532629477804.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/sh -xe /tmp/jenkins7493962309028120949.sh
+ /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/csit/run-project-csit.sh opa-pdp
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 60.2M  100 60.2M    0     0  73.6M      0 --:--:-- --:--:-- --:--:-- 73.6M
Setting project configuration for: opa-pdp
Configuring docker compose...
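Both docker warnings above are actionable: the login can read the password from stdin, and the Compose v2 plugin can be installed into the per-user CLI plugin directory. A hedged sketch (registry, username, and release URL are placeholders; the exact values used by run-project-csit.sh are not shown in this log):

# Log in without putting the password on the command line
echo "$DOCKER_PASSWORD" | docker login --username "$DOCKER_USERNAME" --password-stdin "$DOCKER_REGISTRY"
# Install the Docker Compose plugin for the current user
mkdir -p ~/.docker/cli-plugins
curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
  -o ~/.docker/cli-plugins/docker-compose
chmod +x ~/.docker/cli-plugins/docker-compose
docker compose version   # should now resolve instead of "'compose' is not a docker command"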
Starting opa-pdp using postgres + Grafana/Prometheus
policy-db-migrator Pulling
kafka Pulling
opa-pdp Pulling
grafana Pulling
zookeeper Pulling
prometheus Pulling
postgres Pulling
pap Pulling
api Pulling
[... per-layer "Pulling fs layer" / Downloading / Verifying Checksum / Extracting progress output omitted ...]
policy-db-migrator Pulled
api Pulled
pap Pulled
opa-pdp Pulled
prometheus Pulled
[... per-layer download/extract progress for the remaining images continues ...]
294.9kB/8.066MB 8b5292c940e1 Extracting [==================> ] 23.4MB/63.48MB 55f2b468da67 Extracting [=======================> ] 121.4MB/257.9MB eca0188f477e Extracting [===================================> ] 26.35MB/37.17MB eabd8714fec9 Downloading [============> ] 91.91MB/375MB da3ed5db7103 Downloading [============> ] 32.98MB/127.4MB 531ee2cf3c0c Extracting [============================> ] 4.522MB/8.066MB 8b5292c940e1 Extracting [====================> ] 26.18MB/63.48MB 55f2b468da67 Extracting [========================> ] 125.3MB/257.9MB eabd8714fec9 Downloading [=============> ] 102.2MB/375MB eca0188f477e Extracting [=========================================> ] 30.67MB/37.17MB da3ed5db7103 Downloading [=================> ] 43.79MB/127.4MB 531ee2cf3c0c Extracting [=======================================> ] 6.39MB/8.066MB 8b5292c940e1 Extracting [======================> ] 28.97MB/63.48MB 55f2b468da67 Extracting [========================> ] 128.7MB/257.9MB 531ee2cf3c0c Extracting [==================================================>] 8.066MB/8.066MB eabd8714fec9 Downloading [===============> ] 114.6MB/375MB da3ed5db7103 Downloading [======================> ] 56.77MB/127.4MB eca0188f477e Extracting [=============================================> ] 33.82MB/37.17MB 8b5292c940e1 Extracting [========================> ] 30.64MB/63.48MB 55f2b468da67 Extracting [=========================> ] 130.9MB/257.9MB eabd8714fec9 Downloading [================> ] 124.4MB/375MB da3ed5db7103 Downloading [==========================> ] 67.04MB/127.4MB eca0188f477e Extracting [===============================================> ] 35MB/37.17MB 531ee2cf3c0c Pull complete eabd8714fec9 Downloading [=================> ] 133MB/375MB 8b5292c940e1 Extracting [=========================> ] 31.75MB/63.48MB da3ed5db7103 Downloading [============================> ] 71.91MB/127.4MB 55f2b468da67 Extracting [=========================> ] 133.1MB/257.9MB ed54a7dee1d8 Extracting [=> ] 32.77kB/1.196MB eca0188f477e Extracting [=================================================> ] 36.57MB/37.17MB eca0188f477e Extracting [==================================================>] 37.17MB/37.17MB eabd8714fec9 Downloading [===================> ] 145.4MB/375MB 8b5292c940e1 Extracting [==========================> ] 33.42MB/63.48MB da3ed5db7103 Downloading [==================================> ] 88.67MB/127.4MB 55f2b468da67 Extracting [==========================> ] 137MB/257.9MB ed54a7dee1d8 Extracting [=======================================> ] 950.3kB/1.196MB ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB eabd8714fec9 Downloading [=====================> ] 160.6MB/375MB da3ed5db7103 Downloading [=========================================> ] 104.9MB/127.4MB 8b5292c940e1 Extracting [============================> ] 36.21MB/63.48MB 55f2b468da67 Extracting [===========================> ] 141.5MB/257.9MB da3ed5db7103 Downloading [===========================================> ] 111.4MB/127.4MB eabd8714fec9 Downloading [======================> ] 171.4MB/375MB 8b5292c940e1 Extracting [=============================> ] 37.32MB/63.48MB 55f2b468da67 Extracting [===========================> ] 144.3MB/257.9MB da3ed5db7103 Verifying Checksum da3ed5db7103 Download complete eabd8714fec9 Downloading [=========================> ] 188.7MB/375MB 8b5292c940e1 Extracting [===============================> ] 40.11MB/63.48MB 55f2b468da67 
Extracting [============================> ] 149.3MB/257.9MB eabd8714fec9 Downloading [===========================> ] 207.6MB/375MB 8b5292c940e1 Extracting [==================================> ] 44.01MB/63.48MB 55f2b468da67 Extracting [=============================> ] 153.2MB/257.9MB eabd8714fec9 Downloading [=============================> ] 220.1MB/375MB 8b5292c940e1 Extracting [====================================> ] 46.79MB/63.48MB 55f2b468da67 Extracting [==============================> ] 156MB/257.9MB eabd8714fec9 Downloading [===============================> ] 235.7MB/375MB eca0188f477e Pull complete 8b5292c940e1 Extracting [=======================================> ] 50.14MB/63.48MB 55f2b468da67 Extracting [===============================> ] 160.4MB/257.9MB eabd8714fec9 Downloading [=================================> ] 254.1MB/375MB 55f2b468da67 Extracting [===============================> ] 164.9MB/257.9MB 8b5292c940e1 Extracting [=========================================> ] 52.92MB/63.48MB ed54a7dee1d8 Pull complete eabd8714fec9 Downloading [===================================> ] 265.5MB/375MB 55f2b468da67 Extracting [================================> ] 167.1MB/257.9MB e444bcd4d577 Extracting [==================================================>] 279B/279B e444bcd4d577 Extracting [==================================================>] 279B/279B 8b5292c940e1 Extracting [==========================================> ] 54.59MB/63.48MB eabd8714fec9 Downloading [=====================================> ] 279.5MB/375MB 55f2b468da67 Extracting [================================> ] 169.9MB/257.9MB 8b5292c940e1 Extracting [==============================================> ] 59.05MB/63.48MB eabd8714fec9 Downloading [=======================================> ] 293.6MB/375MB eabd8714fec9 Downloading [=======================================> ] 296.3MB/375MB 55f2b468da67 Extracting [=================================> ] 170.5MB/257.9MB eabd8714fec9 Downloading [=========================================> ] 310.3MB/375MB 12c5c803443f Extracting [==================================================>] 116B/116B 12c5c803443f Extracting [==================================================>] 116B/116B 55f2b468da67 Extracting [=================================> ] 171MB/257.9MB eabd8714fec9 Downloading [==========================================> ] 321.7MB/375MB 8b5292c940e1 Extracting [==============================================> ] 59.6MB/63.48MB e444bcd4d577 Pull complete 55f2b468da67 Extracting [=================================> ] 171.6MB/257.9MB eabd8714fec9 Downloading [===========================================> ] 329.3MB/375MB 8b5292c940e1 Extracting [================================================> ] 61.83MB/63.48MB 55f2b468da67 Extracting [=================================> ] 172.1MB/257.9MB eabd8714fec9 Downloading [=============================================> ] 342.8MB/375MB 12c5c803443f Pull complete 8b5292c940e1 Extracting [==================================================>] 63.48MB/63.48MB 8b5292c940e1 Extracting [==================================================>] 63.48MB/63.48MB 55f2b468da67 Extracting [=================================> ] 173.2MB/257.9MB eabd8714fec9 Downloading [===============================================> ] 352.5MB/375MB e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB 55f2b468da67 Extracting [=================================> ] 
174.4MB/257.9MB eabd8714fec9 Downloading [=================================================> ] 370.4MB/375MB eabd8714fec9 Verifying Checksum eabd8714fec9 Download complete 55f2b468da67 Extracting [==================================> ] 176MB/257.9MB 8b5292c940e1 Pull complete e27c75a98748 Pull complete eabd8714fec9 Extracting [> ] 557.1kB/375MB 454a4350d439 Extracting [==================================================>] 11.93kB/11.93kB 454a4350d439 Extracting [==================================================>] 11.93kB/11.93kB 55f2b468da67 Extracting [==================================> ] 178.3MB/257.9MB eabd8714fec9 Extracting [> ] 7.242MB/375MB e73cb4a42719 Extracting [> ] 557.1kB/109.1MB 55f2b468da67 Extracting [===================================> ] 182.2MB/257.9MB eabd8714fec9 Extracting [==> ] 17.27MB/375MB e73cb4a42719 Extracting [===> ] 6.685MB/109.1MB 55f2b468da67 Extracting [====================================> ] 187.7MB/257.9MB eabd8714fec9 Extracting [===> ] 22.84MB/375MB e73cb4a42719 Extracting [====> ] 10.58MB/109.1MB 55f2b468da67 Extracting [=====================================> ] 192.7MB/257.9MB e73cb4a42719 Extracting [======> ] 13.37MB/109.1MB eabd8714fec9 Extracting [===> ] 23.95MB/375MB eabd8714fec9 Extracting [===> ] 24.51MB/375MB e73cb4a42719 Extracting [======> ] 14.48MB/109.1MB 55f2b468da67 Extracting [=====================================> ] 195MB/257.9MB eabd8714fec9 Extracting [====> ] 35.09MB/375MB e73cb4a42719 Extracting [========> ] 18.94MB/109.1MB eabd8714fec9 Extracting [=====> ] 40.11MB/375MB e73cb4a42719 Extracting [=========> ] 20.61MB/109.1MB eabd8714fec9 Extracting [======> ] 45.12MB/375MB eabd8714fec9 Extracting [======> ] 45.68MB/375MB e73cb4a42719 Extracting [==========> ] 22.84MB/109.1MB 454a4350d439 Pull complete 55f2b468da67 Extracting [======================================> ] 196.6MB/257.9MB 9a8c18aee5ea Extracting [==================================================>] 1.227kB/1.227kB 9a8c18aee5ea Extracting [==================================================>] 1.227kB/1.227kB e73cb4a42719 Extracting [===========> ] 24.51MB/109.1MB eabd8714fec9 Extracting [======> ] 50.69MB/375MB 55f2b468da67 Extracting [======================================> ] 197.2MB/257.9MB e73cb4a42719 Extracting [============> ] 27.3MB/109.1MB eabd8714fec9 Extracting [========> ] 62.39MB/375MB 55f2b468da67 Extracting [======================================> ] 200MB/257.9MB eabd8714fec9 Extracting [==========> ] 77.43MB/375MB e73cb4a42719 Extracting [==============> ] 32.31MB/109.1MB 55f2b468da67 Extracting [=======================================> ] 202.8MB/257.9MB eabd8714fec9 Extracting [===========> ] 87.46MB/375MB e73cb4a42719 Extracting [=================> ] 38.99MB/109.1MB 55f2b468da67 Extracting [=======================================> ] 203.3MB/257.9MB eabd8714fec9 Extracting [============> ] 90.24MB/375MB e73cb4a42719 Extracting [====================> ] 44.56MB/109.1MB eabd8714fec9 Extracting [=============> ] 98.6MB/375MB 55f2b468da67 Extracting [=======================================> ] 206.1MB/257.9MB e73cb4a42719 Extracting [=====================> ] 47.91MB/109.1MB eabd8714fec9 Extracting [==============> ] 107MB/375MB 55f2b468da67 Extracting [========================================> ] 207.8MB/257.9MB e73cb4a42719 Extracting [=======================> ] 51.25MB/109.1MB eabd8714fec9 Extracting [==============> ] 111.4MB/375MB 55f2b468da67 Extracting [========================================> ] 210.6MB/257.9MB e73cb4a42719 Extracting 
[========================> ] 52.92MB/109.1MB eabd8714fec9 Extracting [===============> ] 115.3MB/375MB 55f2b468da67 Extracting [=========================================> ] 212.8MB/257.9MB e73cb4a42719 Extracting [=========================> ] 55.15MB/109.1MB eabd8714fec9 Extracting [===============> ] 119.2MB/375MB e73cb4a42719 Extracting [==========================> ] 57.38MB/109.1MB 55f2b468da67 Extracting [=========================================> ] 215.6MB/257.9MB eabd8714fec9 Extracting [================> ] 125.3MB/375MB e73cb4a42719 Extracting [===========================> ] 59.6MB/109.1MB 55f2b468da67 Extracting [==========================================> ] 220.6MB/257.9MB eabd8714fec9 Extracting [=================> ] 130.4MB/375MB e73cb4a42719 Extracting [=============================> ] 64.62MB/109.1MB 55f2b468da67 Extracting [===========================================> ] 224.5MB/257.9MB eabd8714fec9 Extracting [==================> ] 135.9MB/375MB e73cb4a42719 Extracting [================================> ] 70.75MB/109.1MB 55f2b468da67 Extracting [===========================================> ] 226.7MB/257.9MB eabd8714fec9 Extracting [==================> ] 139.8MB/375MB e73cb4a42719 Extracting [==================================> ] 75.2MB/109.1MB 9a8c18aee5ea Pull complete eabd8714fec9 Extracting [===================> ] 144.8MB/375MB 55f2b468da67 Extracting [============================================> ] 229MB/257.9MB e73cb4a42719 Extracting [=====================================> ] 80.77MB/109.1MB eabd8714fec9 Extracting [===================> ] 148.2MB/375MB 55f2b468da67 Extracting [============================================> ] 231.2MB/257.9MB e73cb4a42719 Extracting [=======================================> ] 85.79MB/109.1MB eabd8714fec9 Extracting [====================> ] 152.1MB/375MB 55f2b468da67 Extracting [=============================================> ] 232.8MB/257.9MB e73cb4a42719 Extracting [========================================> ] 88.01MB/109.1MB eabd8714fec9 Extracting [====================> ] 152.6MB/375MB e73cb4a42719 Extracting [========================================> ] 88.57MB/109.1MB 55f2b468da67 Extracting [=============================================> ] 233.4MB/257.9MB eabd8714fec9 Extracting [====================> ] 153.2MB/375MB e73cb4a42719 Extracting [==========================================> ] 92.47MB/109.1MB eabd8714fec9 Extracting [====================> ] 157.1MB/375MB 55f2b468da67 Extracting [=============================================> ] 236.2MB/257.9MB e73cb4a42719 Extracting [===========================================> ] 95.81MB/109.1MB eabd8714fec9 Extracting [=====================> ] 161.5MB/375MB 55f2b468da67 Extracting [==============================================> ] 239.5MB/257.9MB eabd8714fec9 Extracting [======================> ] 167.7MB/375MB e73cb4a42719 Extracting [============================================> ] 96.93MB/109.1MB eabd8714fec9 Extracting [========================> ] 184.4MB/375MB 55f2b468da67 Extracting [==============================================> ] 241.2MB/257.9MB grafana Pulled eabd8714fec9 Extracting [=========================> ] 194.4MB/375MB e73cb4a42719 Extracting [=============================================> ] 99.71MB/109.1MB 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB eabd8714fec9 Extracting [===========================> ] 203.9MB/375MB e73cb4a42719 Extracting [===============================================> ] 103.6MB/109.1MB eabd8714fec9 
Extracting [============================> ] 215.6MB/375MB 55f2b468da67 Extracting [=================================================> ] 252.9MB/257.9MB e73cb4a42719 Extracting [================================================> ] 105.8MB/109.1MB 55f2b468da67 Extracting [=================================================> ] 256.2MB/257.9MB eabd8714fec9 Extracting [=============================> ] 219.5MB/375MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB eabd8714fec9 Extracting [=============================> ] 222.3MB/375MB e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB eabd8714fec9 Extracting [==============================> ] 227.3MB/375MB eabd8714fec9 Extracting [==============================> ] 231.2MB/375MB eabd8714fec9 Extracting [===============================> ] 235.1MB/375MB eabd8714fec9 Extracting [================================> ] 240.1MB/375MB 55f2b468da67 Pull complete e73cb4a42719 Pull complete eabd8714fec9 Extracting [================================> ] 245.1MB/375MB eabd8714fec9 Extracting [=================================> ] 249MB/375MB eabd8714fec9 Extracting [=================================> ] 253.5MB/375MB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB eabd8714fec9 Extracting [==================================> ] 259MB/375MB eabd8714fec9 Extracting [==================================> ] 262.4MB/375MB eabd8714fec9 Extracting [===================================> ] 268.5MB/375MB eabd8714fec9 Extracting [====================================> ] 270.2MB/375MB 82bfc142787e Extracting [> ] 98.3kB/8.613MB eabd8714fec9 Extracting [====================================> ] 271.8MB/375MB 82bfc142787e Extracting [======================> ] 3.932MB/8.613MB eabd8714fec9 Extracting [====================================> ] 273.5MB/375MB 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB eabd8714fec9 Extracting [====================================> ] 275.7MB/375MB eabd8714fec9 Extracting [=====================================> ] 281.9MB/375MB eabd8714fec9 Extracting [======================================> ] 288MB/375MB eabd8714fec9 Extracting [======================================> ] 289.7MB/375MB eabd8714fec9 Extracting [=======================================> ] 294.1MB/375MB eabd8714fec9 Extracting [=======================================> ] 295.8MB/375MB eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB eabd8714fec9 Extracting [=======================================> ] 299.1MB/375MB a83b68436f09 Pull complete eabd8714fec9 Extracting [========================================> ] 301.4MB/375MB eabd8714fec9 Extracting [========================================> ] 303.6MB/375MB eabd8714fec9 Extracting [========================================> ] 305.8MB/375MB eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB eabd8714fec9 Extracting [=========================================> ] 310.3MB/375MB eabd8714fec9 Extracting [=========================================> ] 312MB/375MB eabd8714fec9 Extracting [=========================================> ] 314.7MB/375MB 
eabd8714fec9 Extracting [==========================================> ] 319.2MB/375MB eabd8714fec9 Extracting [===========================================> ] 323.1MB/375MB eabd8714fec9 Extracting [===========================================> ] 327MB/375MB 82bfc142787e Pull complete 787d6bee9571 Extracting [==================================================>] 127B/127B 787d6bee9571 Extracting [==================================================>] 127B/127B eabd8714fec9 Extracting [===========================================> ] 329.2MB/375MB eabd8714fec9 Extracting [============================================> ] 330.9MB/375MB eabd8714fec9 Extracting [============================================> ] 332.6MB/375MB eabd8714fec9 Extracting [=============================================> ] 337.6MB/375MB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB eabd8714fec9 Extracting [=============================================> ] 339.8MB/375MB eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB 787d6bee9571 Pull complete 46baca71a4ef Pull complete 13ff0988aaea Extracting [==================================================>] 167B/167B 13ff0988aaea Extracting [==================================================>] 167B/167B b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB eabd8714fec9 Extracting [=============================================> ] 342MB/375MB 13ff0988aaea Pull complete 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB b0e0ef7895f4 Extracting [===========> ] 8.651MB/37.01MB b0e0ef7895f4 Extracting [===============================> ] 23.59MB/37.01MB eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB 4b82842ab819 Pull complete 7e568a0dc8fb Extracting [==================================================>] 184B/184B 7e568a0dc8fb Extracting [==================================================>] 184B/184B b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB eabd8714fec9 Extracting [==============================================> ] 345.4MB/375MB eabd8714fec9 Extracting [==============================================> ] 346.5MB/375MB b0e0ef7895f4 Pull complete 7e568a0dc8fb Pull complete c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB eabd8714fec9 Extracting [===============================================> ] 352.6MB/375MB postgres Pulled c0c90eeb8aca Pull complete 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 5cfb27c10ea5 Extracting [==================================================>] 852B/852B eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB 5cfb27c10ea5 Pull complete eabd8714fec9 Extracting [================================================> ] 362.1MB/375MB 40a5eed61bb0 Extracting [==================================================>] 98B/98B 40a5eed61bb0 Extracting [==================================================>] 98B/98B eabd8714fec9 Extracting [=================================================> ] 368.8MB/375MB 40a5eed61bb0 Pull complete e040ea11fa10 Extracting 
[==================================================>] 173B/173B e040ea11fa10 Extracting [==================================================>] 173B/173B eabd8714fec9 Extracting [=================================================> ] 373.8MB/375MB eabd8714fec9 Extracting [==================================================>] 375MB/375MB e040ea11fa10 Pull complete 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB 09d5a3f70313 Extracting [=> ] 3.342MB/109.2MB eabd8714fec9 Pull complete 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 09d5a3f70313 Extracting [=======> ] 17.27MB/109.2MB 45fd2fec8a19 Pull complete 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 09d5a3f70313 Extracting [===============> ] 34.54MB/109.2MB 8f10199ed94b Extracting [====================> ] 3.539MB/8.768MB 09d5a3f70313 Extracting [========================> ] 54.03MB/109.2MB 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 09d5a3f70313 Extracting [=================================> ] 72.42MB/109.2MB 8f10199ed94b Pull complete f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 09d5a3f70313 Extracting [==========================================> ] 92.47MB/109.2MB f963a77d2726 Pull complete 09d5a3f70313 Extracting [===============================================> ] 104.7MB/109.2MB f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09d5a3f70313 Pull complete 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB f3a82e9f1761 Extracting [============> ] 11.01MB/44.41MB f3a82e9f1761 Extracting [=========================> ] 22.94MB/44.41MB 356f5c2c843b Pull complete kafka Pulled f3a82e9f1761 Extracting [========================================> ] 36.24MB/44.41MB f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB f3a82e9f1761 Pull complete 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Pull complete 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Pull complete 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Pull complete 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Pull complete 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Pull complete 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 71a9f6a9ab4d Extracting 
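The collapsed layer-pull output above is ordinary docker compose bring-up chatter for the monitoring and policy images. As a rough sketch, the equivalent manual steps look like this (the compose project directory name is an assumption, not taken from this log; the service names match the containers created below):
  cd compose                    # assumed location of the CSIT docker-compose file
  docker compose pull           # emits the per-layer Downloading/Extracting progress collapsed above
  docker compose up -d prometheus grafana postgres zookeeper kafka \
    policy-db-migrator policy-api policy-pap policy-opa-pdp   # creates and starts the containers listed below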
71a9f6a9ab4d Pull complete
da3ed5db7103 Pull complete
c955f6e31a04 Pull complete
zookeeper Pulled
Network compose_default Creating
Network compose_default Created
Container prometheus Creating
Container zookeeper Creating
Container postgres Creating
Container postgres Created
Container prometheus Created
Container grafana Creating
Container policy-db-migrator Creating
Container zookeeper Created
Container kafka Creating
Container policy-db-migrator Created
Container policy-api Creating
Container grafana Created
Container kafka Created
Container policy-api Created
Container policy-pap Creating
Container policy-pap Created
Container policy-opa-pdp Creating
Container policy-opa-pdp Created
Container prometheus Starting
Container zookeeper Starting
Container postgres Starting
Container postgres Started
Container policy-db-migrator Starting
Container policy-db-migrator Started
Container policy-api Starting
Container policy-api Started
Container prometheus Started
Container grafana Starting
Container grafana Started
Container zookeeper Started
Container kafka Starting
Container kafka Started
Container policy-pap Starting
Container policy-pap Started
Container policy-opa-pdp Starting
Container policy-opa-pdp Started
Prometheus server: http://localhost:30259
Grafana server: http://localhost:30269
Waiting 3 minutes for OPA-PDP to start...
Checking if REST port 30003 is open on localhost ...
IMAGE                                                       NAMES            STATUS
nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT    policy-opa-pdp   Up 3 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT        policy-pap       Up 3 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT        policy-api       Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9           kafka            Up 3 minutes
nexus3.onap.org:10001/grafana/grafana:latest                grafana          Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest      zookeeper        Up 3 minutes
nexus3.onap.org:10001/prom/prometheus:latest                prometheus       Up 3 minutes
nexus3.onap.org:10001/library/postgres:16.4                 postgres         Up 3 minutes
Checking if REST port 30012 is open on localhost ...
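The two "Checking if REST port ... is open on localhost" lines are a readiness wait between the compose bring-up and the test run. A minimal sketch of such a check, assuming nc is available on the build agent (the loop shape and sleep interval are illustrative, not copied from the CSIT scripts):
  for port in 30003 30012; do
    until nc -z localhost "$port"; do   # exits 0 once something is listening on the port
      echo "Waiting for REST port $port ..."
      sleep 10
    done
  done
  # snapshot of the running containers, similar to the IMAGE/NAMES/STATUS tables in this log
  docker ps --format 'table {{.Image}}\t{{.Names}}\t{{.Status}}'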
IMAGE                                                       NAMES            STATUS
nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT    policy-opa-pdp   Up 3 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT        policy-pap       Up 3 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT        policy-api       Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9           kafka            Up 3 minutes
nexus3.onap.org:10001/grafana/grafana:latest                grafana          Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest      zookeeper        Up 3 minutes
nexus3.onap.org:10001/prom/prometheus:latest                prometheus       Up 3 minutes
nexus3.onap.org:10001/library/postgres:16.4                 postgres         Up 3 minutes
Cloning into '/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/csit/resources/tests/models'...
Building robot framework docker image
sha256:8d4092bf6feb44aa30363f870da27302bbe5a67af15306bbb1708b27bb65d423
top - 07:13:46 up 6 min, 0 users, load average: 1.09, 1.08, 0.55
Tasks: 218 total, 1 running, 148 sleeping, 0 stopped, 0 zombie
%Cpu(s): 10.7 us, 2.4 sy, 0.0 ni, 84.5 id, 2.2 wa, 0.0 hi, 0.1 si, 0.1 st
       total   used   free   shared   buff/cache   available
Mem:     31G   2.4G    21G      28M         7.3G         28G
Swap:   1.0G     0B   1.0G
IMAGE                                                       NAMES            STATUS
nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT    policy-opa-pdp   Up 3 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT        policy-pap       Up 3 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT        policy-api       Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9           kafka            Up 3 minutes
nexus3.onap.org:10001/grafana/grafana:latest                grafana          Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest      zookeeper        Up 3 minutes
nexus3.onap.org:10001/prom/prometheus:latest                prometheus       Up 3 minutes
nexus3.onap.org:10001/library/postgres:16.4                 postgres         Up 3 minutes
CONTAINER ID   NAME             CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
9996fc592968   policy-opa-pdp   0.21%   12.95MiB / 31.41GiB   0.04%   80.6kB / 78kB     0B / 0B         21
8de9e946534d   policy-pap       1.01%   537.6MiB / 31.41GiB   1.67%   2.21MB / 1.26MB   0B / 139MB      68
c61cdb41e948   policy-api       0.09%   401.6MiB / 31.41GiB   1.25%   1.15MB / 1.09MB   0B / 0B         59
6326b66cbe87   kafka            2.37%   412.3MiB / 31.41GiB   1.28%   307kB / 291kB     0B / 754kB      83
a3460188858d   grafana          0.18%   113.2MiB / 31.41GiB   0.35%   19.1MB / 148kB    0B / 31.3MB     23
69220571dcc0   zookeeper        0.12%   85.83MiB / 31.41GiB   0.27%   57.8kB / 49.8kB   0B / 397kB      62
4011a741b6fc   prometheus       0.00%   22.02MiB / 31.41GiB   0.07%   277kB / 12.4kB    0B / 0B         13
0655d6763b26   postgres         0.02%   85.68MiB / 31.41GiB   0.27%   2.55MB / 3.73MB   102kB / 159MB   26
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
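Inside the policy-csit container the ROBOT_VARIABLES printed below are handed to the Robot Framework CLI, and the results end up under /tmp/results. Conceptually the run reduces to something like the following (the container entrypoint is not visible in this log, so the exact command is an assumption; only a subset of the variables is repeated here):
  robot --outputdir /tmp/results \
    -v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies \
    -v POLICY_API_IP:policy-api:6969 \
    -v POLICY_PAP_IP:policy-pap:6969 \
    -v POLICY_OPA_IP:policy-opa-pdp:8282 \
    -v PROMETHEUS_IP:prometheus:9090 \
    opa-pdp-test.robot opa-pdp-slas.robot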
policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Healthcheck :: Verify OPA PDP health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateDataBeforePolicyDeployment | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesZonePolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesVehiclePolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesAbacPolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test | PASS |
policy-csit | 5 tests, 5 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas | PASS |
policy-csit | 5 tests, 5 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas | PASS |
policy-csit | 10 tests, 10 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
IMAGE                                                       NAMES            STATUS
nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT    policy-opa-pdp   Up 6 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT        policy-pap       Up 6 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT        policy-api       Up 6 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9           kafka            Up 6 minutes
nexus3.onap.org:10001/grafana/grafana:latest                grafana          Up 6 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest      zookeeper        Up 6 minutes
nexus3.onap.org:10001/prom/prometheus:latest                prometheus       Up 6 minutes
nexus3.onap.org:10001/library/postgres:16.4                 postgres         Up 6 minutes
Shut down started!
Collecting logs from docker compose containers...
grafana | logger=settings t=2025-06-13T07:10:03.714014442Z level=info msg="Starting Grafana" version=12.0.1 commit=80658a73c5355e3ed318e5e021c0866285153b57 branch=HEAD compiled=2025-06-13T07:10:03Z
grafana | logger=settings t=2025-06-13T07:10:03.714310827Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2025-06-13T07:10:03.714318307Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2025-06-13T07:10:03.714322277Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2025-06-13T07:10:03.714325617Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2025-06-13T07:10:03.714328767Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-13T07:10:03.714331797Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-13T07:10:03.714334877Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2025-06-13T07:10:03.714338307Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2025-06-13T07:10:03.714341647Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2025-06-13T07:10:03.714344577Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-13T07:10:03.714347208Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-13T07:10:03.714351098Z level=info msg=Target target=[all]
grafana | logger=settings t=2025-06-13T07:10:03.714357518Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2025-06-13T07:10:03.714360668Z level=info msg="Path Data"
path=/var/lib/grafana grafana | logger=settings t=2025-06-13T07:10:03.714363598Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2025-06-13T07:10:03.714367438Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2025-06-13T07:10:03.714371149Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2025-06-13T07:10:03.714374349Z level=info msg="App mode production" grafana | logger=featuremgmt t=2025-06-13T07:10:03.714712755Z level=info msg=FeatureToggles prometheusAzureOverrideAudience=true onPremToCloudMigrations=true recordedQueriesMulti=true azureMonitorEnableUserAuth=true alertingUIOptimizeReducer=true influxdbBackendMigration=true pluginsDetailsRightPanel=true logsContextDatasourceUi=true awsAsyncQueryCaching=true publicDashboardsScene=true recoveryThreshold=true angularDeprecationUI=true panelMonitoring=true dataplaneFrontendFallback=true annotationPermissionUpdate=true alertingSimplifiedRouting=true alertRuleRestore=true newFiltersUI=true alertingRuleVersionHistoryRestore=true groupToNestedTableTransformation=true alertingQueryAndExpressionsStepMode=true logRowsPopoverMenu=true tlsMemcached=true unifiedRequestLog=true alertingRuleRecoverDeleted=true preinstallAutoUpdate=true addFieldFromCalculationStatFunctions=true azureMonitorPrometheusExemplars=true unifiedStorageSearchPermissionFiltering=true failWrongDSUID=true dashboardSceneForViewers=true cloudWatchRoundUpEndTime=true lokiQuerySplitting=true newPDFRendering=true useSessionStorageForRedirection=true lokiQueryHints=true correlations=true grafanaconThemes=true promQLScope=true pinNavItems=true alertingApiServer=true dashboardSceneSolo=true lokiStructuredMetadata=true nestedFolders=true dashboardScene=true logsExploreTableVisualisation=true alertingNotificationsStepMode=true prometheusUsesCombobox=true formatString=true ssoSettingsSAML=true externalCorePlugins=true transformationsRedesign=true cloudWatchNewLabelParsing=true logsInfiniteScrolling=true lokiLabelNamesQueryApi=true alertingRulePermanentlyDelete=true dashgpt=true alertingInsights=true kubernetesClientDashboardsFolders=true cloudWatchCrossAccountQuerying=true kubernetesPlaylists=true reportingUseRawTimeRange=true logsPanelControls=true ssoSettingsApi=true newDashboardSharingComponent=true grafana | logger=sqlstore t=2025-06-13T07:10:03.714775186Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2025-06-13T07:10:03.714788526Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2025-06-13T07:10:03.716285314Z level=info msg="Locking database" grafana | logger=migrator t=2025-06-13T07:10:03.716301055Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2025-06-13T07:10:03.716968707Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2025-06-13T07:10:03.717797933Z level=info msg="Migration successfully executed" id="create migration_log table" duration=828.616µs grafana | logger=migrator t=2025-06-13T07:10:03.728622489Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2025-06-13T07:10:03.729848182Z level=info msg="Migration successfully executed" id="create user table" duration=1.224993ms grafana | logger=migrator t=2025-06-13T07:10:03.763578851Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2025-06-13T07:10:03.764855236Z level=info 
msg="Migration successfully executed" id="add unique index user.login" duration=1.278935ms grafana | logger=migrator t=2025-06-13T07:10:03.778722428Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2025-06-13T07:10:03.779608705Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=886.227µs grafana | logger=migrator t=2025-06-13T07:10:03.790608883Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2025-06-13T07:10:03.791774825Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.166782ms grafana | logger=migrator t=2025-06-13T07:10:03.802758054Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2025-06-13T07:10:03.804499556Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.740712ms grafana | logger=migrator t=2025-06-13T07:10:03.817352581Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2025-06-13T07:10:03.819874588Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.522008ms grafana | logger=migrator t=2025-06-13T07:10:03.826218728Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2025-06-13T07:10:03.827071554Z level=info msg="Migration successfully executed" id="create user table v2" duration=852.396µs grafana | logger=migrator t=2025-06-13T07:10:03.846233608Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2025-06-13T07:10:03.847809937Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.575479ms grafana | logger=migrator t=2025-06-13T07:10:03.857750765Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2025-06-13T07:10:03.858456599Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=706.214µs grafana | logger=migrator t=2025-06-13T07:10:03.872379583Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2025-06-13T07:10:03.873197728Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=817.845µs grafana | logger=migrator t=2025-06-13T07:10:03.880559018Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2025-06-13T07:10:03.881415654Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=856.236µs grafana | logger=migrator t=2025-06-13T07:10:03.892134697Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2025-06-13T07:10:03.894047444Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.911426ms grafana | logger=migrator t=2025-06-13T07:10:03.928473576Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2025-06-13T07:10:03.928511686Z level=info msg="Migration successfully executed" id="Update user table charset" duration=38.69µs grafana | logger=migrator t=2025-06-13T07:10:03.940212688Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2025-06-13T07:10:03.94130708Z level=info msg="Migration successfully executed" id="Add 
last_seen_at column to user" duration=1.094292ms grafana | logger=migrator t=2025-06-13T07:10:03.956004458Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2025-06-13T07:10:03.956585928Z level=info msg="Migration successfully executed" id="Add missing user data" duration=582.04µs grafana | logger=migrator t=2025-06-13T07:10:03.970065335Z level=info msg="Executing migration" id="Add is_disabled column to user" grafana | logger=migrator t=2025-06-13T07:10:03.971251126Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.185431ms grafana | logger=migrator t=2025-06-13T07:10:03.994560769Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2025-06-13T07:10:03.996136908Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.576549ms grafana | logger=migrator t=2025-06-13T07:10:04.014171749Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2025-06-13T07:10:04.016642344Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=2.469736ms grafana | logger=migrator t=2025-06-13T07:10:04.028094269Z level=info msg="Executing migration" id="Update is_service_account column to nullable" grafana | logger=migrator t=2025-06-13T07:10:04.039294919Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=11.20066ms grafana | logger=migrator t=2025-06-13T07:10:04.049586411Z level=info msg="Executing migration" id="Add uid column to user" grafana | logger=migrator t=2025-06-13T07:10:04.051394145Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.807074ms grafana | logger=migrator t=2025-06-13T07:10:04.071512482Z level=info msg="Executing migration" id="Update uid column values for users" grafana | logger=migrator t=2025-06-13T07:10:04.072106252Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=593.66µs grafana | logger=migrator t=2025-06-13T07:10:04.079734945Z level=info msg="Executing migration" id="Add unique index user_uid" grafana | logger=migrator t=2025-06-13T07:10:04.080859946Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.125771ms grafana | logger=migrator t=2025-06-13T07:10:04.094482891Z level=info msg="Executing migration" id="Add is_provisioned column to user" grafana | logger=migrator t=2025-06-13T07:10:04.096356816Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.811844ms grafana | logger=migrator t=2025-06-13T07:10:04.118777346Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" grafana | logger=migrator t=2025-06-13T07:10:04.1195386Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=765.905µs grafana | logger=migrator t=2025-06-13T07:10:04.124101756Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once" grafana | logger=migrator t=2025-06-13T07:10:04.125005162Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=903.536µs grafana | logger=migrator 
t=2025-06-13T07:10:04.13503912Z level=info msg="Executing migration" id="update login and email fields to lowercase" grafana | logger=migrator t=2025-06-13T07:10:04.135477338Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=435.048µs grafana | logger=migrator t=2025-06-13T07:10:04.144741211Z level=info msg="Executing migration" id="update login and email fields to lowercase2" grafana | logger=migrator t=2025-06-13T07:10:04.145460825Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=719.574µs grafana | logger=migrator t=2025-06-13T07:10:04.168221631Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2025-06-13T07:10:04.170030675Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.809143ms grafana | logger=migrator t=2025-06-13T07:10:04.17407732Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2025-06-13T07:10:04.17507644Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.00017ms grafana | logger=migrator t=2025-06-13T07:10:04.178630566Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator t=2025-06-13T07:10:04.179321789Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=691.023µs grafana | logger=migrator t=2025-06-13T07:10:04.186468653Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2025-06-13T07:10:04.18793441Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.468717ms grafana | logger=migrator t=2025-06-13T07:10:04.191936175Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2025-06-13T07:10:04.193463644Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.529809ms grafana | logger=migrator t=2025-06-13T07:10:04.198878155Z level=info msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2025-06-13T07:10:04.198960077Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=42.891µs grafana | logger=migrator t=2025-06-13T07:10:04.202797348Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2025-06-13T07:10:04.203835358Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.03776ms grafana | logger=migrator t=2025-06-13T07:10:04.210695655Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2025-06-13T07:10:04.211414379Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=718.854µs grafana | logger=migrator t=2025-06-13T07:10:04.217462622Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2025-06-13T07:10:04.218878539Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.415937ms grafana | logger=migrator t=2025-06-13T07:10:04.226566873Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator 
t=2025-06-13T07:10:04.227464259Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=898.936µs grafana | logger=migrator t=2025-06-13T07:10:04.232263059Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T07:10:04.236720953Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.456814ms grafana | logger=migrator t=2025-06-13T07:10:04.240386172Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2025-06-13T07:10:04.241511792Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.12559ms grafana | logger=migrator t=2025-06-13T07:10:04.246850392Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2025-06-13T07:10:04.247762619Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=911.667µs grafana | logger=migrator t=2025-06-13T07:10:04.25149651Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2025-06-13T07:10:04.252766264Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.269044ms grafana | logger=migrator t=2025-06-13T07:10:04.25633012Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2025-06-13T07:10:04.257232927Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=902.657µs grafana | logger=migrator t=2025-06-13T07:10:04.274173694Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2025-06-13T07:10:04.275497249Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.322074ms grafana | logger=migrator t=2025-06-13T07:10:04.28568488Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2025-06-13T07:10:04.28682715Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=1.14264ms grafana | logger=migrator t=2025-06-13T07:10:04.292016118Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2025-06-13T07:10:04.292798483Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=781.555µs grafana | logger=migrator t=2025-06-13T07:10:04.297664234Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" grafana | logger=migrator t=2025-06-13T07:10:04.29856312Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=898.766µs grafana | logger=migrator t=2025-06-13T07:10:04.303473632Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2025-06-13T07:10:04.304548293Z level=info msg="Migration successfully executed" id="create star table" duration=1.07528ms grafana | logger=migrator t=2025-06-13T07:10:04.308912774Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2025-06-13T07:10:04.310200298Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.287834ms grafana | logger=migrator 
t=2025-06-13T07:10:04.314674192Z level=info msg="Executing migration" id="Add column dashboard_uid in star" grafana | logger=migrator t=2025-06-13T07:10:04.316489416Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=1.814974ms grafana | logger=migrator t=2025-06-13T07:10:04.321580501Z level=info msg="Executing migration" id="Add column org_id in star" grafana | logger=migrator t=2025-06-13T07:10:04.323052029Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.470907ms grafana | logger=migrator t=2025-06-13T07:10:04.326356321Z level=info msg="Executing migration" id="Add column updated in star" grafana | logger=migrator t=2025-06-13T07:10:04.327897819Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.544378ms grafana | logger=migrator t=2025-06-13T07:10:04.331234972Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" grafana | logger=migrator t=2025-06-13T07:10:04.332024177Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=788.605µs grafana | logger=migrator t=2025-06-13T07:10:04.337590341Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2025-06-13T07:10:04.339046588Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.455617ms grafana | logger=migrator t=2025-06-13T07:10:04.344368277Z level=info msg="Executing migration" id="create index UQE_org_name - v1" grafana | logger=migrator t=2025-06-13T07:10:04.345783393Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.416356ms grafana | logger=migrator t=2025-06-13T07:10:04.350805248Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2025-06-13T07:10:04.35203462Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.228932ms grafana | logger=migrator t=2025-06-13T07:10:04.356617687Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2025-06-13T07:10:04.357658186Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.042629ms grafana | logger=migrator t=2025-06-13T07:10:04.361747312Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2025-06-13T07:10:04.362540217Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=793.185µs grafana | logger=migrator t=2025-06-13T07:10:04.368851165Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2025-06-13T07:10:04.369914586Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.065151ms grafana | logger=migrator t=2025-06-13T07:10:04.373582934Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2025-06-13T07:10:04.373615805Z level=info msg="Migration successfully executed" id="Update org table charset" duration=31.13µs grafana | logger=migrator t=2025-06-13T07:10:04.378486396Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2025-06-13T07:10:04.378513106Z level=info msg="Migration successfully executed" id="Update 
org_user table charset" duration=27.73µs grafana | logger=migrator t=2025-06-13T07:10:04.382369129Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2025-06-13T07:10:04.382558942Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=189.463µs grafana | logger=migrator t=2025-06-13T07:10:04.385503427Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2025-06-13T07:10:04.386775031Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.276764ms grafana | logger=migrator t=2025-06-13T07:10:04.390743506Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2025-06-13T07:10:04.392239583Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.494907ms grafana | logger=migrator t=2025-06-13T07:10:04.398576862Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2025-06-13T07:10:04.40062258Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=2.064618ms grafana | logger=migrator t=2025-06-13T07:10:04.406948189Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator t=2025-06-13T07:10:04.408062059Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.11364ms grafana | logger=migrator t=2025-06-13T07:10:04.413784547Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2025-06-13T07:10:04.415333706Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.556439ms grafana | logger=migrator t=2025-06-13T07:10:04.42253777Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2025-06-13T07:10:04.423397666Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=861.436µs grafana | logger=migrator t=2025-06-13T07:10:04.427062955Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2025-06-13T07:10:04.434377402Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=7.310328ms grafana | logger=migrator t=2025-06-13T07:10:04.447691281Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2025-06-13T07:10:04.449132618Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.443647ms grafana | logger=migrator t=2025-06-13T07:10:04.452838997Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2025-06-13T07:10:04.454203862Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.364115ms grafana | logger=migrator t=2025-06-13T07:10:04.459420011Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2025-06-13T07:10:04.460280367Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=859.875µs grafana | logger=migrator t=2025-06-13T07:10:04.466386111Z level=info msg="Executing migration" id="copy dashboard v1 to 
v2" grafana | logger=migrator t=2025-06-13T07:10:04.466945471Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=559.55µs grafana | logger=migrator t=2025-06-13T07:10:04.47008907Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2025-06-13T07:10:04.471364224Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.274654ms grafana | logger=migrator t=2025-06-13T07:10:04.475296157Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2025-06-13T07:10:04.475314877Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=19.49µs grafana | logger=migrator t=2025-06-13T07:10:04.480436994Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2025-06-13T07:10:04.483366499Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.927705ms grafana | logger=migrator t=2025-06-13T07:10:04.487103088Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2025-06-13T07:10:04.489949831Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.847413ms grafana | logger=migrator t=2025-06-13T07:10:04.493463107Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2025-06-13T07:10:04.495439924Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.976567ms grafana | logger=migrator t=2025-06-13T07:10:04.498221816Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2025-06-13T07:10:04.499032121Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=810.595µs grafana | logger=migrator t=2025-06-13T07:10:04.50373068Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2025-06-13T07:10:04.505663546Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.932246ms grafana | logger=migrator t=2025-06-13T07:10:04.5085306Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2025-06-13T07:10:04.509544988Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.014138ms grafana | logger=migrator t=2025-06-13T07:10:04.516873786Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2025-06-13T07:10:04.518322033Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.446907ms grafana | logger=migrator t=2025-06-13T07:10:04.523486479Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2025-06-13T07:10:04.52353612Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=50.001µs grafana | logger=migrator t=2025-06-13T07:10:04.526517406Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2025-06-13T07:10:04.526556057Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=39.441µs grafana | logger=migrator t=2025-06-13T07:10:04.529442591Z 
level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2025-06-13T07:10:04.531493009Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.050118ms grafana | logger=migrator t=2025-06-13T07:10:04.536874029Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2025-06-13T07:10:04.538870947Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.996278ms grafana | logger=migrator t=2025-06-13T07:10:04.54272291Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2025-06-13T07:10:04.544786838Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.062468ms grafana | logger=migrator t=2025-06-13T07:10:04.551691308Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2025-06-13T07:10:04.553790137Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.098449ms grafana | logger=migrator t=2025-06-13T07:10:04.561980299Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2025-06-13T07:10:04.562390188Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=411.508µs grafana | logger=migrator t=2025-06-13T07:10:04.57108497Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2025-06-13T07:10:04.571785503Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=701.583µs grafana | logger=migrator t=2025-06-13T07:10:04.575473992Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2025-06-13T07:10:04.576057673Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=583.821µs grafana | logger=migrator t=2025-06-13T07:10:04.57804106Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2025-06-13T07:10:04.578068691Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=28.64µs grafana | logger=migrator t=2025-06-13T07:10:04.580785521Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2025-06-13T07:10:04.581618887Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=832.586µs grafana | logger=migrator t=2025-06-13T07:10:04.586784444Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2025-06-13T07:10:04.587499958Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=715.163µs grafana | logger=migrator t=2025-06-13T07:10:04.59033082Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T07:10:04.59722817Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.89319ms grafana | logger=migrator t=2025-06-13T07:10:04.601636242Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2025-06-13T07:10:04.60260905Z 
level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=973.758µs grafana | logger=migrator t=2025-06-13T07:10:04.605398102Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2025-06-13T07:10:04.606262098Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=863.556µs grafana | logger=migrator t=2025-06-13T07:10:04.622741987Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2025-06-13T07:10:04.623709914Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=970.187µs grafana | logger=migrator t=2025-06-13T07:10:04.629515874Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2025-06-13T07:10:04.629920111Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=404.147µs grafana | logger=migrator t=2025-06-13T07:10:04.632432778Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2025-06-13T07:10:04.63307925Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=645.752µs grafana | logger=migrator t=2025-06-13T07:10:04.637818929Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2025-06-13T07:10:04.640032961Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.214022ms grafana | logger=migrator t=2025-06-13T07:10:04.647353197Z level=info msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2025-06-13T07:10:04.648444598Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.093861ms grafana | logger=migrator t=2025-06-13T07:10:04.65178611Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2025-06-13T07:10:04.651970184Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=184.164µs grafana | logger=migrator t=2025-06-13T07:10:04.656877226Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2025-06-13T07:10:04.657057639Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=180.673µs grafana | logger=migrator t=2025-06-13T07:10:04.659985353Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2025-06-13T07:10:04.660776288Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=791.105µs grafana | logger=migrator t=2025-06-13T07:10:04.66404612Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2025-06-13T07:10:04.666685299Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.637959ms grafana | logger=migrator t=2025-06-13T07:10:04.683384842Z level=info msg="Executing migration" id="Add deleted for dashboard" grafana | logger=migrator t=2025-06-13T07:10:04.687283164Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=3.892332ms grafana | logger=migrator t=2025-06-13T07:10:04.694247865Z level=info 
msg="Executing migration" id="Add index for deleted" grafana | logger=migrator t=2025-06-13T07:10:04.695244924Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=999.179µs grafana | logger=migrator t=2025-06-13T07:10:04.700649574Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag" grafana | logger=migrator t=2025-06-13T07:10:04.704785672Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=4.133988ms grafana | logger=migrator t=2025-06-13T07:10:04.708658835Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag" grafana | logger=migrator t=2025-06-13T07:10:04.711131621Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.472125ms grafana | logger=migrator t=2025-06-13T07:10:04.714243599Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag" grafana | logger=migrator t=2025-06-13T07:10:04.714755488Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=517.769µs grafana | logger=migrator t=2025-06-13T07:10:04.730116436Z level=info msg="Executing migration" id="Add apiVersion for dashboard" grafana | logger=migrator t=2025-06-13T07:10:04.732714315Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=2.599089ms grafana | logger=migrator t=2025-06-13T07:10:04.7426131Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" grafana | logger=migrator t=2025-06-13T07:10:04.744006436Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=1.394336ms grafana | logger=migrator t=2025-06-13T07:10:04.753610326Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" grafana | logger=migrator t=2025-06-13T07:10:04.754444661Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=836.105µs grafana | logger=migrator t=2025-06-13T07:10:04.762585814Z level=info msg="Executing migration" id="create data_source table" grafana | logger=migrator t=2025-06-13T07:10:04.764084052Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.499468ms grafana | logger=migrator t=2025-06-13T07:10:04.768472634Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2025-06-13T07:10:04.769433592Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=962.008µs grafana | logger=migrator t=2025-06-13T07:10:04.773449707Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2025-06-13T07:10:04.774484757Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.03531ms grafana | logger=migrator t=2025-06-13T07:10:04.781195752Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2025-06-13T07:10:04.781995366Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=801.294µs grafana | logger=migrator t=2025-06-13T07:10:04.785936421Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2025-06-13T07:10:04.787089332Z 
level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.154102ms grafana | logger=migrator t=2025-06-13T07:10:04.791353982Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2025-06-13T07:10:04.800056325Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=8.702083ms grafana | logger=migrator t=2025-06-13T07:10:04.807482774Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2025-06-13T07:10:04.808401771Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=918.997µs grafana | logger=migrator t=2025-06-13T07:10:04.814025416Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2025-06-13T07:10:04.81478573Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=760.394µs grafana | logger=migrator t=2025-06-13T07:10:04.817994171Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2025-06-13T07:10:04.818728284Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=733.653µs grafana | logger=migrator t=2025-06-13T07:10:04.823425663Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2025-06-13T07:10:04.823966512Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=540.71µs grafana | logger=migrator t=2025-06-13T07:10:04.827249474Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2025-06-13T07:10:04.829619869Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.369995ms grafana | logger=migrator t=2025-06-13T07:10:04.834271795Z level=info msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2025-06-13T07:10:04.836917665Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.64562ms grafana | logger=migrator t=2025-06-13T07:10:04.840337289Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator t=2025-06-13T07:10:04.840365619Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=28.98µs grafana | logger=migrator t=2025-06-13T07:10:04.843565059Z level=info msg="Executing migration" id="Update initial version to 1" grafana | logger=migrator t=2025-06-13T07:10:04.843790294Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=224.814µs grafana | logger=migrator t=2025-06-13T07:10:04.84629788Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2025-06-13T07:10:04.848815227Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.517077ms grafana | logger=migrator t=2025-06-13T07:10:04.854953512Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator t=2025-06-13T07:10:04.855166036Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=208.594µs grafana | logger=migrator t=2025-06-13T07:10:04.861311441Z level=info msg="Executing migration" id="Update json_data with nulls" 
grafana | logger=migrator t=2025-06-13T07:10:04.861497965Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=183.944µs grafana | logger=migrator t=2025-06-13T07:10:04.865053741Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2025-06-13T07:10:04.867623149Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.568618ms grafana | logger=migrator t=2025-06-13T07:10:04.872355658Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2025-06-13T07:10:04.872547472Z level=info msg="Migration successfully executed" id="Update uid value" duration=191.614µs grafana | logger=migrator t=2025-06-13T07:10:04.875816072Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2025-06-13T07:10:04.877432853Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.617541ms grafana | logger=migrator t=2025-06-13T07:10:04.880713684Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" grafana | logger=migrator t=2025-06-13T07:10:04.881682682Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=968.518µs grafana | logger=migrator t=2025-06-13T07:10:04.884842201Z level=info msg="Executing migration" id="Add is_prunable column" grafana | logger=migrator t=2025-06-13T07:10:04.888082902Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=3.241141ms grafana | logger=migrator t=2025-06-13T07:10:04.893380032Z level=info msg="Executing migration" id="Add api_version column" grafana | logger=migrator t=2025-06-13T07:10:04.896233845Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.852613ms grafana | logger=migrator t=2025-06-13T07:10:04.899663239Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" grafana | logger=migrator t=2025-06-13T07:10:04.8996834Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=21.181µs grafana | logger=migrator t=2025-06-13T07:10:04.902140715Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2025-06-13T07:10:04.90295406Z level=info msg="Migration successfully executed" id="create api_key table" duration=814.955µs grafana | logger=migrator t=2025-06-13T07:10:04.90774109Z level=info msg="Executing migration" id="add index api_key.account_id" grafana | logger=migrator t=2025-06-13T07:10:04.909261728Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.519498ms grafana | logger=migrator t=2025-06-13T07:10:04.914890993Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2025-06-13T07:10:04.915997755Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.106022ms grafana | logger=migrator t=2025-06-13T07:10:04.919106572Z level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2025-06-13T07:10:04.920159473Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.052521ms grafana | logger=migrator t=2025-06-13T07:10:04.925093555Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" grafana | logger=migrator t=2025-06-13T07:10:04.926084873Z 
level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=990.668µs grafana | logger=migrator t=2025-06-13T07:10:04.929216752Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" grafana | logger=migrator t=2025-06-13T07:10:04.930140499Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=923.427µs grafana | logger=migrator t=2025-06-13T07:10:04.933417831Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2025-06-13T07:10:04.934328837Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=911.046µs grafana | logger=migrator t=2025-06-13T07:10:04.93978882Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2025-06-13T07:10:04.947527325Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=7.737975ms grafana | logger=migrator t=2025-06-13T07:10:04.951145132Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator t=2025-06-13T07:10:04.952031458Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=885.836µs grafana | logger=migrator t=2025-06-13T07:10:04.956994851Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2025-06-13T07:10:04.958879567Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.884406ms grafana | logger=migrator t=2025-06-13T07:10:04.963009064Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2025-06-13T07:10:04.963833029Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=823.045µs grafana | logger=migrator t=2025-06-13T07:10:04.970436864Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2025-06-13T07:10:04.971988372Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.549428ms grafana | logger=migrator t=2025-06-13T07:10:04.980694205Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2025-06-13T07:10:04.981304616Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=609.661µs grafana | logger=migrator t=2025-06-13T07:10:04.98519277Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2025-06-13T07:10:04.986212158Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=1.018658ms grafana | logger=migrator t=2025-06-13T07:10:04.990316855Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2025-06-13T07:10:04.990373486Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=57.911µs grafana | logger=migrator t=2025-06-13T07:10:04.997354447Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2025-06-13T07:10:05.002111106Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.758099ms grafana | logger=migrator t=2025-06-13T07:10:05.005550701Z level=info msg="Executing migration" id="Add service account foreign key" grafana | logger=migrator 
t=2025-06-13T07:10:05.008319774Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.772034ms grafana | logger=migrator t=2025-06-13T07:10:05.011872691Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2025-06-13T07:10:05.012146416Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=270.585µs grafana | logger=migrator t=2025-06-13T07:10:05.015726624Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2025-06-13T07:10:05.019417586Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.690442ms grafana | logger=migrator t=2025-06-13T07:10:05.025597373Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2025-06-13T07:10:05.028608571Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=3.011018ms grafana | logger=migrator t=2025-06-13T07:10:05.031873312Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2025-06-13T07:10:05.032665878Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=791.906µs grafana | logger=migrator t=2025-06-13T07:10:05.035894329Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2025-06-13T07:10:05.036593173Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=698.554µs grafana | logger=migrator t=2025-06-13T07:10:05.040809903Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2025-06-13T07:10:05.04170386Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=893.617µs grafana | logger=migrator t=2025-06-13T07:10:05.045129665Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2025-06-13T07:10:05.046008153Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=878.298µs grafana | logger=migrator t=2025-06-13T07:10:05.049621981Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2025-06-13T07:10:05.050533749Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=911.567µs grafana | logger=migrator t=2025-06-13T07:10:05.055868861Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2025-06-13T07:10:05.057330638Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.457037ms grafana | logger=migrator t=2025-06-13T07:10:05.061362596Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2025-06-13T07:10:05.061392826Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=31.98µs grafana | logger=migrator t=2025-06-13T07:10:05.065027365Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2025-06-13T07:10:05.065054546Z level=info msg="Migration successfully executed" 
id="Update dashboard_snapshot table charset" duration=27.671µs grafana | logger=migrator t=2025-06-13T07:10:05.069283946Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2025-06-13T07:10:05.072238122Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.953496ms grafana | logger=migrator t=2025-06-13T07:10:05.076108037Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2025-06-13T07:10:05.079056533Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.948186ms grafana | logger=migrator t=2025-06-13T07:10:05.082358346Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2025-06-13T07:10:05.082379986Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=46.381µs grafana | logger=migrator t=2025-06-13T07:10:05.085652128Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator t=2025-06-13T07:10:05.086446713Z level=info msg="Migration successfully executed" id="create quota table v1" duration=793.955µs grafana | logger=migrator t=2025-06-13T07:10:05.090173115Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2025-06-13T07:10:05.091136803Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=963.378µs grafana | logger=migrator t=2025-06-13T07:10:05.096394384Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2025-06-13T07:10:05.096471575Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=78.251µs grafana | logger=migrator t=2025-06-13T07:10:05.101263487Z level=info msg="Executing migration" id="create plugin_setting table" grafana | logger=migrator t=2025-06-13T07:10:05.102144853Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=880.916µs grafana | logger=migrator t=2025-06-13T07:10:05.106542337Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator t=2025-06-13T07:10:05.107880532Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.337635ms grafana | logger=migrator t=2025-06-13T07:10:05.111397689Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" grafana | logger=migrator t=2025-06-13T07:10:05.116003648Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.606109ms grafana | logger=migrator t=2025-06-13T07:10:05.119424032Z level=info msg="Executing migration" id="Update plugin_setting table charset" grafana | logger=migrator t=2025-06-13T07:10:05.119449502Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=26.04µs grafana | logger=migrator t=2025-06-13T07:10:05.125417247Z level=info msg="Executing migration" id="update NULL org_id to 1" grafana | logger=migrator t=2025-06-13T07:10:05.125770634Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=349.287µs grafana | logger=migrator t=2025-06-13T07:10:05.129631057Z 
level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1" grafana | logger=migrator t=2025-06-13T07:10:05.139697789Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=10.066242ms grafana | logger=migrator t=2025-06-13T07:10:05.14498037Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2025-06-13T07:10:05.145800855Z level=info msg="Migration successfully executed" id="create session table" duration=819.695µs grafana | logger=migrator t=2025-06-13T07:10:05.158030079Z level=info msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2025-06-13T07:10:05.158166221Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=135.072µs grafana | logger=migrator t=2025-06-13T07:10:05.16178106Z level=info msg="Executing migration" id="Drop old table playlist_item table" grafana | logger=migrator t=2025-06-13T07:10:05.162033115Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=246.225µs grafana | logger=migrator t=2025-06-13T07:10:05.166056212Z level=info msg="Executing migration" id="create playlist table v2" grafana | logger=migrator t=2025-06-13T07:10:05.166776725Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=720.423µs grafana | logger=migrator t=2025-06-13T07:10:05.173727458Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2025-06-13T07:10:05.174489162Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=761.064µs grafana | logger=migrator t=2025-06-13T07:10:05.180267592Z level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2025-06-13T07:10:05.180305373Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=38.891µs grafana | logger=migrator t=2025-06-13T07:10:05.184510823Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2025-06-13T07:10:05.184535094Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=26.671µs grafana | logger=migrator t=2025-06-13T07:10:05.189138392Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2025-06-13T07:10:05.192830422Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.69514ms grafana | logger=migrator t=2025-06-13T07:10:05.201947776Z level=info msg="Executing migration" id="Add playlist column updated_at" grafana | logger=migrator t=2025-06-13T07:10:05.204966214Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.019768ms grafana | logger=migrator t=2025-06-13T07:10:05.211974598Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2025-06-13T07:10:05.212154771Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=182.973µs grafana | logger=migrator t=2025-06-13T07:10:05.216326421Z level=info msg="Executing migration" id="drop preferences table v3" grafana | logger=migrator t=2025-06-13T07:10:05.216463703Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=137.702µs grafana | logger=migrator t=2025-06-13T07:10:05.219305357Z level=info msg="Executing migration" 
id="create preferences table v3" grafana | logger=migrator t=2025-06-13T07:10:05.220202164Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=896.597µs grafana | logger=migrator t=2025-06-13T07:10:05.22628149Z level=info msg="Executing migration" id="Update preferences table charset" grafana | logger=migrator t=2025-06-13T07:10:05.226325221Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=45.221µs grafana | logger=migrator t=2025-06-13T07:10:05.230730045Z level=info msg="Executing migration" id="Add column team_id in preferences" grafana | logger=migrator t=2025-06-13T07:10:05.235328673Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.600308ms grafana | logger=migrator t=2025-06-13T07:10:05.244689491Z level=info msg="Executing migration" id="Update team_id column values in preferences" grafana | logger=migrator t=2025-06-13T07:10:05.244943256Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=256.645µs grafana | logger=migrator t=2025-06-13T07:10:05.248075246Z level=info msg="Executing migration" id="Add column week_start in preferences" grafana | logger=migrator t=2025-06-13T07:10:05.252142193Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=4.065887ms grafana | logger=migrator t=2025-06-13T07:10:05.257492356Z level=info msg="Executing migration" id="Add column preferences.json_data" grafana | logger=migrator t=2025-06-13T07:10:05.261023583Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.530437ms grafana | logger=migrator t=2025-06-13T07:10:05.264861106Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" grafana | logger=migrator t=2025-06-13T07:10:05.264896036Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=25.93µs grafana | logger=migrator t=2025-06-13T07:10:05.268321272Z level=info msg="Executing migration" id="Add preferences index org_id" grafana | logger=migrator t=2025-06-13T07:10:05.269546686Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.225003ms grafana | logger=migrator t=2025-06-13T07:10:05.27610092Z level=info msg="Executing migration" id="Add preferences index user_id" grafana | logger=migrator t=2025-06-13T07:10:05.27710036Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=999.52µs grafana | logger=migrator t=2025-06-13T07:10:05.28185806Z level=info msg="Executing migration" id="create alert table v1" grafana | logger=migrator t=2025-06-13T07:10:05.28291832Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.06006ms grafana | logger=migrator t=2025-06-13T07:10:05.285906037Z level=info msg="Executing migration" id="add index alert org_id & id " grafana | logger=migrator t=2025-06-13T07:10:05.286808635Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=902.468µs grafana | logger=migrator t=2025-06-13T07:10:05.293216407Z level=info msg="Executing migration" id="add index alert state" grafana | logger=migrator t=2025-06-13T07:10:05.294289047Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.07555ms grafana | logger=migrator t=2025-06-13T07:10:05.298893285Z level=info msg="Executing 
migration" id="add index alert dashboard_id" grafana | logger=migrator t=2025-06-13T07:10:05.299768932Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=875.377µs grafana | logger=migrator t=2025-06-13T07:10:05.302810229Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" grafana | logger=migrator t=2025-06-13T07:10:05.303515543Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=704.754µs grafana | logger=migrator t=2025-06-13T07:10:05.306602102Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" grafana | logger=migrator t=2025-06-13T07:10:05.307709033Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.107161ms grafana | logger=migrator t=2025-06-13T07:10:05.312885442Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" grafana | logger=migrator t=2025-06-13T07:10:05.313714778Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=829.376µs grafana | logger=migrator t=2025-06-13T07:10:05.316195456Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" grafana | logger=migrator t=2025-06-13T07:10:05.326232677Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.036712ms grafana | logger=migrator t=2025-06-13T07:10:05.330770143Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" grafana | logger=migrator t=2025-06-13T07:10:05.331448496Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=678.293µs grafana | logger=migrator t=2025-06-13T07:10:05.335872301Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" grafana | logger=migrator t=2025-06-13T07:10:05.337754686Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.880965ms grafana | logger=migrator t=2025-06-13T07:10:05.342589078Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" grafana | logger=migrator t=2025-06-13T07:10:05.343059707Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=462.569µs grafana | logger=migrator t=2025-06-13T07:10:05.351405857Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" grafana | logger=migrator t=2025-06-13T07:10:05.352298203Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=891.756µs grafana | logger=migrator t=2025-06-13T07:10:05.356225519Z level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2025-06-13T07:10:05.357940411Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.713382ms grafana | logger=migrator t=2025-06-13T07:10:05.36257742Z level=info msg="Executing migration" id="Add column is_default" grafana | logger=migrator t=2025-06-13T07:10:05.365600677Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.022897ms grafana | logger=migrator t=2025-06-13T07:10:05.368691257Z level=info msg="Executing migration" id="Add column frequency" grafana | logger=migrator 
t=2025-06-13T07:10:05.371552351Z level=info msg="Migration successfully executed" id="Add column frequency" duration=2.861194ms grafana | logger=migrator t=2025-06-13T07:10:05.376528516Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2025-06-13T07:10:05.379916011Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.387075ms grafana | logger=migrator t=2025-06-13T07:10:05.387087597Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2025-06-13T07:10:05.393491899Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=6.353991ms grafana | logger=migrator t=2025-06-13T07:10:05.398366002Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2025-06-13T07:10:05.400267979Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.908727ms grafana | logger=migrator t=2025-06-13T07:10:05.404257354Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2025-06-13T07:10:05.404297295Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=41.401µs grafana | logger=migrator t=2025-06-13T07:10:05.407947345Z level=info msg="Executing migration" id="Update alert_notification table charset" grafana | logger=migrator t=2025-06-13T07:10:05.407982326Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=36.311µs grafana | logger=migrator t=2025-06-13T07:10:05.413017382Z level=info msg="Executing migration" id="create notification_journal table v1" grafana | logger=migrator t=2025-06-13T07:10:05.414561061Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.545119ms grafana | logger=migrator t=2025-06-13T07:10:05.418304392Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-13T07:10:05.422326439Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=4.019817ms grafana | logger=migrator t=2025-06-13T07:10:05.426717392Z level=info msg="Executing migration" id="drop alert_notification_journal" grafana | logger=migrator t=2025-06-13T07:10:05.428055568Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.338846ms grafana | logger=migrator t=2025-06-13T07:10:05.431582595Z level=info msg="Executing migration" id="create alert_notification_state table v1" grafana | logger=migrator t=2025-06-13T07:10:05.432523464Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=940.639µs grafana | logger=migrator t=2025-06-13T07:10:05.437998768Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-13T07:10:05.439048568Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.04951ms grafana | logger=migrator t=2025-06-13T07:10:05.442127546Z level=info msg="Executing migration" id="Add for to alert table" grafana | logger=migrator t=2025-06-13T07:10:05.445180425Z level=info msg="Migration successfully executed" id="Add for to alert table" 
duration=3.091369ms grafana | logger=migrator t=2025-06-13T07:10:05.447816076Z level=info msg="Executing migration" id="Add column uid in alert_notification" grafana | logger=migrator t=2025-06-13T07:10:05.450901494Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.084168ms grafana | logger=migrator t=2025-06-13T07:10:05.457024411Z level=info msg="Executing migration" id="Update uid column values in alert_notification" grafana | logger=migrator t=2025-06-13T07:10:05.457275426Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=251.135µs grafana | logger=migrator t=2025-06-13T07:10:05.461633198Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" grafana | logger=migrator t=2025-06-13T07:10:05.464134397Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=2.499598ms grafana | logger=migrator t=2025-06-13T07:10:05.467598453Z level=info msg="Executing migration" id="Remove unique index org_id_name" grafana | logger=migrator t=2025-06-13T07:10:05.46901062Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.412147ms grafana | logger=migrator t=2025-06-13T07:10:05.474055226Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" grafana | logger=migrator t=2025-06-13T07:10:05.478040772Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.984316ms grafana | logger=migrator t=2025-06-13T07:10:05.481406896Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" grafana | logger=migrator t=2025-06-13T07:10:05.481431137Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=25.061µs grafana | logger=migrator t=2025-06-13T07:10:05.484883192Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" grafana | logger=migrator t=2025-06-13T07:10:05.486093475Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.240294ms grafana | logger=migrator t=2025-06-13T07:10:05.49158943Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" grafana | logger=migrator t=2025-06-13T07:10:05.49260039Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.01187ms grafana | logger=migrator t=2025-06-13T07:10:05.495856901Z level=info msg="Executing migration" id="Drop old annotation table v4" grafana | logger=migrator t=2025-06-13T07:10:05.495937503Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=80.962µs grafana | logger=migrator t=2025-06-13T07:10:05.498467261Z level=info msg="Executing migration" id="create annotation table v5" grafana | logger=migrator t=2025-06-13T07:10:05.500135753Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.669602ms grafana | logger=migrator t=2025-06-13T07:10:05.513739513Z level=info msg="Executing migration" id="add index annotation 0 v3" grafana | logger=migrator t=2025-06-13T07:10:05.515337363Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.5979ms grafana | logger=migrator t=2025-06-13T07:10:05.519223137Z level=info msg="Executing migration" id="add index 
annotation 1 v3" grafana | logger=migrator t=2025-06-13T07:10:05.520073713Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=850.676µs grafana | logger=migrator t=2025-06-13T07:10:05.523094311Z level=info msg="Executing migration" id="add index annotation 2 v3" grafana | logger=migrator t=2025-06-13T07:10:05.524008688Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=913.807µs grafana | logger=migrator t=2025-06-13T07:10:05.528428363Z level=info msg="Executing migration" id="add index annotation 3 v3" grafana | logger=migrator t=2025-06-13T07:10:05.529421811Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=993.268µs grafana | logger=migrator t=2025-06-13T07:10:05.532867787Z level=info msg="Executing migration" id="add index annotation 4 v3" grafana | logger=migrator t=2025-06-13T07:10:05.533966459Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.086681ms grafana | logger=migrator t=2025-06-13T07:10:05.53982933Z level=info msg="Executing migration" id="Update annotation table charset" grafana | logger=migrator t=2025-06-13T07:10:05.539877651Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=49.421µs grafana | logger=migrator t=2025-06-13T07:10:05.543777106Z level=info msg="Executing migration" id="Add column region_id to annotation table" grafana | logger=migrator t=2025-06-13T07:10:05.54823907Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.460904ms grafana | logger=migrator t=2025-06-13T07:10:05.551800338Z level=info msg="Executing migration" id="Drop category_id index" grafana | logger=migrator t=2025-06-13T07:10:05.552659294Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=858.866µs grafana | logger=migrator t=2025-06-13T07:10:05.55818886Z level=info msg="Executing migration" id="Add column tags to annotation table" grafana | logger=migrator t=2025-06-13T07:10:05.56604476Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=7.856569ms grafana | logger=migrator t=2025-06-13T07:10:05.571849091Z level=info msg="Executing migration" id="Create annotation_tag table v2" grafana | logger=migrator t=2025-06-13T07:10:05.57237048Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=521.679µs grafana | logger=migrator t=2025-06-13T07:10:05.575240306Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" grafana | logger=migrator t=2025-06-13T07:10:05.57600648Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=767.454µs grafana | logger=migrator t=2025-06-13T07:10:05.579854773Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" grafana | logger=migrator t=2025-06-13T07:10:05.581805721Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.924067ms grafana | logger=migrator t=2025-06-13T07:10:05.588610941Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" grafana | logger=migrator t=2025-06-13T07:10:05.601563617Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=12.952806ms grafana | 
logger=migrator t=2025-06-13T07:10:05.607611983Z level=info msg="Executing migration" id="Create annotation_tag table v3" grafana | logger=migrator t=2025-06-13T07:10:05.609102691Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=1.490808ms grafana | logger=migrator t=2025-06-13T07:10:05.616230027Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" grafana | logger=migrator t=2025-06-13T07:10:05.61743069Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.208843ms grafana | logger=migrator t=2025-06-13T07:10:05.62212622Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" grafana | logger=migrator t=2025-06-13T07:10:05.622498507Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=371.836µs grafana | logger=migrator t=2025-06-13T07:10:05.627697785Z level=info msg="Executing migration" id="drop table annotation_tag_v2" grafana | logger=migrator t=2025-06-13T07:10:05.628626424Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=927.699µs grafana | logger=migrator t=2025-06-13T07:10:05.631833074Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" grafana | logger=migrator t=2025-06-13T07:10:05.632240482Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=406.678µs grafana | logger=migrator t=2025-06-13T07:10:05.635514445Z level=info msg="Executing migration" id="Add created time to annotation table" grafana | logger=migrator t=2025-06-13T07:10:05.640168403Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.653348ms grafana | logger=migrator t=2025-06-13T07:10:05.645299692Z level=info msg="Executing migration" id="Add updated time to annotation table" grafana | logger=migrator t=2025-06-13T07:10:05.649737246Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.437284ms grafana | logger=migrator t=2025-06-13T07:10:05.652614751Z level=info msg="Executing migration" id="Add index for created in annotation table" grafana | logger=migrator t=2025-06-13T07:10:05.653518038Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=902.867µs grafana | logger=migrator t=2025-06-13T07:10:05.656164288Z level=info msg="Executing migration" id="Add index for updated in annotation table" grafana | logger=migrator t=2025-06-13T07:10:05.657072825Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=908.097µs grafana | logger=migrator t=2025-06-13T07:10:05.662286295Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" grafana | logger=migrator t=2025-06-13T07:10:05.66253049Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=244.275µs grafana | logger=migrator t=2025-06-13T07:10:05.666782151Z level=info msg="Executing migration" id="Add epoch_end column" grafana | logger=migrator t=2025-06-13T07:10:05.6714537Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.664929ms grafana | logger=migrator 
t=2025-06-13T07:10:05.682072823Z level=info msg="Executing migration" id="Add index for epoch_end" grafana | logger=migrator t=2025-06-13T07:10:05.683565241Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.504229ms grafana | logger=migrator t=2025-06-13T07:10:05.689257739Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" grafana | logger=migrator t=2025-06-13T07:10:05.690577135Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=1.318576ms grafana | logger=migrator t=2025-06-13T07:10:05.694766955Z level=info msg="Executing migration" id="Move region to single row" grafana | logger=migrator t=2025-06-13T07:10:05.695370246Z level=info msg="Migration successfully executed" id="Move region to single row" duration=602.892µs grafana | logger=migrator t=2025-06-13T07:10:05.699716879Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" grafana | logger=migrator t=2025-06-13T07:10:05.701121226Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.392916ms grafana | logger=migrator t=2025-06-13T07:10:05.704439079Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" grafana | logger=migrator t=2025-06-13T07:10:05.705821616Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.383427ms grafana | logger=migrator t=2025-06-13T07:10:05.710850701Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-13T07:10:05.711761619Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=910.308µs grafana | logger=migrator t=2025-06-13T07:10:05.714806787Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-13T07:10:05.716117432Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.309855ms grafana | logger=migrator t=2025-06-13T07:10:05.721651128Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" grafana | logger=migrator t=2025-06-13T07:10:05.722949293Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.297935ms grafana | logger=migrator t=2025-06-13T07:10:05.726307276Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" grafana | logger=migrator t=2025-06-13T07:10:05.727753703Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.446087ms grafana | logger=migrator t=2025-06-13T07:10:05.731056987Z level=info msg="Executing migration" id="Increase tags column to length 4096" grafana | logger=migrator t=2025-06-13T07:10:05.731082117Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=26.49µs grafana | logger=migrator t=2025-06-13T07:10:05.737276476Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" grafana | logger=migrator t=2025-06-13T07:10:05.737294036Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" 
duration=18.441µs grafana | logger=migrator t=2025-06-13T07:10:05.740360474Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" grafana | logger=migrator t=2025-06-13T07:10:05.740393394Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=33.81µs grafana | logger=migrator t=2025-06-13T07:10:05.744267429Z level=info msg="Executing migration" id="create test_data table" grafana | logger=migrator t=2025-06-13T07:10:05.745798478Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.530359ms grafana | logger=migrator t=2025-06-13T07:10:05.749406527Z level=info msg="Executing migration" id="create dashboard_version table v1" grafana | logger=migrator t=2025-06-13T07:10:05.75062406Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.217274ms grafana | logger=migrator t=2025-06-13T07:10:05.756849009Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" grafana | logger=migrator t=2025-06-13T07:10:05.757853318Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.005049ms grafana | logger=migrator t=2025-06-13T07:10:05.762356124Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" grafana | logger=migrator t=2025-06-13T07:10:05.763167829Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=813.815µs grafana | logger=migrator t=2025-06-13T07:10:05.766199017Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" grafana | logger=migrator t=2025-06-13T07:10:05.766440912Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=241.145µs grafana | logger=migrator t=2025-06-13T07:10:05.772077889Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" grafana | logger=migrator t=2025-06-13T07:10:05.772494097Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=415.718µs grafana | logger=migrator t=2025-06-13T07:10:05.775674898Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" grafana | logger=migrator t=2025-06-13T07:10:05.77577956Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=105.922µs grafana | logger=migrator t=2025-06-13T07:10:05.779638694Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" grafana | logger=migrator t=2025-06-13T07:10:05.785566496Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=5.928022ms grafana | logger=migrator t=2025-06-13T07:10:05.789056203Z level=info msg="Executing migration" id="create team table" grafana | logger=migrator t=2025-06-13T07:10:05.78994156Z level=info msg="Migration successfully executed" id="create team table" duration=885.437µs grafana | logger=migrator t=2025-06-13T07:10:05.795448825Z level=info msg="Executing migration" id="add index team.org_id" grafana | logger=migrator t=2025-06-13T07:10:05.796384032Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=935.297µs grafana | logger=migrator t=2025-06-13T07:10:05.799700306Z level=info msg="Executing 
migration" id="add unique index team_org_id_name" grafana | logger=migrator t=2025-06-13T07:10:05.800679095Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=977.529µs grafana | logger=migrator t=2025-06-13T07:10:05.806321633Z level=info msg="Executing migration" id="Add column uid in team" grafana | logger=migrator t=2025-06-13T07:10:05.812270456Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=5.949153ms grafana | logger=migrator t=2025-06-13T07:10:05.817981344Z level=info msg="Executing migration" id="Update uid column values in team" grafana | logger=migrator t=2025-06-13T07:10:05.818214519Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=233.135µs grafana | logger=migrator t=2025-06-13T07:10:05.822299197Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" grafana | logger=migrator t=2025-06-13T07:10:05.823273466Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=974.12µs grafana | logger=migrator t=2025-06-13T07:10:05.827885483Z level=info msg="Executing migration" id="Add column external_uid in team" grafana | logger=migrator t=2025-06-13T07:10:05.837443326Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=9.553603ms grafana | logger=migrator t=2025-06-13T07:10:05.841202827Z level=info msg="Executing migration" id="Add column is_provisioned in team" grafana | logger=migrator t=2025-06-13T07:10:05.846782923Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=5.576476ms grafana | logger=migrator t=2025-06-13T07:10:05.85708238Z level=info msg="Executing migration" id="create team member table" grafana | logger=migrator t=2025-06-13T07:10:05.85867393Z level=info msg="Migration successfully executed" id="create team member table" duration=1.59075ms grafana | logger=migrator t=2025-06-13T07:10:05.864606504Z level=info msg="Executing migration" id="add index team_member.org_id" grafana | logger=migrator t=2025-06-13T07:10:05.865924109Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.317785ms grafana | logger=migrator t=2025-06-13T07:10:05.869817683Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" grafana | logger=migrator t=2025-06-13T07:10:05.870768662Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=950.829µs grafana | logger=migrator t=2025-06-13T07:10:05.877040891Z level=info msg="Executing migration" id="add index team_member.team_id" grafana | logger=migrator t=2025-06-13T07:10:05.878293474Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.255043ms grafana | logger=migrator t=2025-06-13T07:10:05.883992804Z level=info msg="Executing migration" id="Add column email to team table" grafana | logger=migrator t=2025-06-13T07:10:05.893108348Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=9.115404ms grafana | logger=migrator t=2025-06-13T07:10:05.896331469Z level=info msg="Executing migration" id="Add column external to team_member table" grafana | logger=migrator t=2025-06-13T07:10:05.901054869Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.72321ms grafana | logger=migrator 
t=2025-06-13T07:10:05.906318709Z level=info msg="Executing migration" id="Add column permission to team_member table" grafana | logger=migrator t=2025-06-13T07:10:05.914554936Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=8.235457ms grafana | logger=migrator t=2025-06-13T07:10:05.918195345Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" grafana | logger=migrator t=2025-06-13T07:10:05.919319637Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=1.125202ms grafana | logger=migrator t=2025-06-13T07:10:05.923271953Z level=info msg="Executing migration" id="create dashboard acl table" grafana | logger=migrator t=2025-06-13T07:10:05.924127999Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=855.766µs grafana | logger=migrator t=2025-06-13T07:10:05.927220718Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" grafana | logger=migrator t=2025-06-13T07:10:05.928609064Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.385626ms grafana | logger=migrator t=2025-06-13T07:10:05.933347565Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" grafana | logger=migrator t=2025-06-13T07:10:05.934597449Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.248564ms grafana | logger=migrator t=2025-06-13T07:10:05.938910541Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" grafana | logger=migrator t=2025-06-13T07:10:05.94045761Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.547059ms grafana | logger=migrator t=2025-06-13T07:10:05.944345285Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" grafana | logger=migrator t=2025-06-13T07:10:05.945574578Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.230083ms grafana | logger=migrator t=2025-06-13T07:10:05.952711474Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" grafana | logger=migrator t=2025-06-13T07:10:05.954502618Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.947947ms grafana | logger=migrator t=2025-06-13T07:10:05.957771111Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" grafana | logger=migrator t=2025-06-13T07:10:05.959063015Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.292214ms grafana | logger=migrator t=2025-06-13T07:10:05.977639519Z level=info msg="Executing migration" id="add index dashboard_permission" grafana | logger=migrator t=2025-06-13T07:10:05.979598967Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.961438ms grafana | logger=migrator t=2025-06-13T07:10:05.986688922Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" grafana | logger=migrator t=2025-06-13T07:10:05.987358955Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=670.413µs grafana | logger=migrator t=2025-06-13T07:10:05.997518659Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and 
folders" grafana | logger=migrator t=2025-06-13T07:10:05.999504837Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=1.987058ms grafana | logger=migrator t=2025-06-13T07:10:06.051296325Z level=info msg="Executing migration" id="create tag table" grafana | logger=migrator t=2025-06-13T07:10:06.053092348Z level=info msg="Migration successfully executed" id="create tag table" duration=1.798113ms grafana | logger=migrator t=2025-06-13T07:10:06.070087332Z level=info msg="Executing migration" id="add index tag.key_value" grafana | logger=migrator t=2025-06-13T07:10:06.07156494Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.479258ms grafana | logger=migrator t=2025-06-13T07:10:06.078004284Z level=info msg="Executing migration" id="create login attempt table" grafana | logger=migrator t=2025-06-13T07:10:06.078930411Z level=info msg="Migration successfully executed" id="create login attempt table" duration=922.427µs grafana | logger=migrator t=2025-06-13T07:10:06.083274564Z level=info msg="Executing migration" id="add index login_attempt.username" grafana | logger=migrator t=2025-06-13T07:10:06.084276594Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=998.649µs grafana | logger=migrator t=2025-06-13T07:10:06.09513571Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" grafana | logger=migrator t=2025-06-13T07:10:06.097785031Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=2.647941ms grafana | logger=migrator t=2025-06-13T07:10:06.104112701Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T07:10:06.118793241Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.67898ms grafana | logger=migrator t=2025-06-13T07:10:06.126013599Z level=info msg="Executing migration" id="create login_attempt v2" grafana | logger=migrator t=2025-06-13T07:10:06.127800463Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.765064ms grafana | logger=migrator t=2025-06-13T07:10:06.136188303Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" grafana | logger=migrator t=2025-06-13T07:10:06.137242023Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.05185ms grafana | logger=migrator t=2025-06-13T07:10:06.14594477Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" grafana | logger=migrator t=2025-06-13T07:10:06.146338226Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=393.907µs grafana | logger=migrator t=2025-06-13T07:10:06.150901984Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" grafana | logger=migrator t=2025-06-13T07:10:06.152058455Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.156151ms grafana | logger=migrator t=2025-06-13T07:10:06.155616594Z level=info msg="Executing migration" id="create user auth table" grafana | logger=migrator t=2025-06-13T07:10:06.156406819Z level=info msg="Migration successfully executed" id="create user auth table" duration=790.495µs grafana | logger=migrator t=2025-06-13T07:10:06.166747016Z level=info msg="Executing 
migration" id="create index IDX_user_auth_auth_module_auth_id - v1" grafana | logger=migrator t=2025-06-13T07:10:06.168726753Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.979808ms grafana | logger=migrator t=2025-06-13T07:10:06.17535997Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" grafana | logger=migrator t=2025-06-13T07:10:06.175432882Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=75.682µs grafana | logger=migrator t=2025-06-13T07:10:06.182917935Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" grafana | logger=migrator t=2025-06-13T07:10:06.188373768Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.457313ms grafana | logger=migrator t=2025-06-13T07:10:06.191870345Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" grafana | logger=migrator t=2025-06-13T07:10:06.196801079Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=4.925424ms grafana | logger=migrator t=2025-06-13T07:10:06.201141652Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" grafana | logger=migrator t=2025-06-13T07:10:06.207465193Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=6.323031ms grafana | logger=migrator t=2025-06-13T07:10:06.21207663Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" grafana | logger=migrator t=2025-06-13T07:10:06.21573407Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.6559ms grafana | logger=migrator t=2025-06-13T07:10:06.220550672Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" grafana | logger=migrator t=2025-06-13T07:10:06.22146324Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=912.658µs grafana | logger=migrator t=2025-06-13T07:10:06.229028073Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" grafana | logger=migrator t=2025-06-13T07:10:06.235446087Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=6.410443ms grafana | logger=migrator t=2025-06-13T07:10:06.246779002Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" grafana | logger=migrator t=2025-06-13T07:10:06.253429599Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=6.650277ms grafana | logger=migrator t=2025-06-13T07:10:06.258339282Z level=info msg="Executing migration" id="create server_lock table" grafana | logger=migrator t=2025-06-13T07:10:06.259095597Z level=info msg="Migration successfully executed" id="create server_lock table" duration=756.105µs grafana | logger=migrator t=2025-06-13T07:10:06.265040711Z level=info msg="Executing migration" id="add index server_lock.operation_uid" grafana | logger=migrator t=2025-06-13T07:10:06.266405907Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.365296ms grafana | logger=migrator t=2025-06-13T07:10:06.269453805Z level=info msg="Executing migration" id="create user auth token table" grafana | logger=migrator t=2025-06-13T07:10:06.270705359Z level=info msg="Migration successfully executed" id="create user auth token table" 
duration=1.251065ms grafana | logger=migrator t=2025-06-13T07:10:06.277977067Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" grafana | logger=migrator t=2025-06-13T07:10:06.279057638Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.081001ms grafana | logger=migrator t=2025-06-13T07:10:06.282354651Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" grafana | logger=migrator t=2025-06-13T07:10:06.283818379Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.461478ms grafana | logger=migrator t=2025-06-13T07:10:06.287477129Z level=info msg="Executing migration" id="add index user_auth_token.user_id" grafana | logger=migrator t=2025-06-13T07:10:06.28914233Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.659752ms grafana | logger=migrator t=2025-06-13T07:10:06.296714404Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" grafana | logger=migrator t=2025-06-13T07:10:06.304764629Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=8.048124ms grafana | logger=migrator t=2025-06-13T07:10:06.308730533Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2025-06-13T07:10:06.309680522Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=949.879µs grafana | logger=migrator t=2025-06-13T07:10:06.312922763Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" grafana | logger=migrator t=2025-06-13T07:10:06.318468099Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=5.544386ms grafana | logger=migrator t=2025-06-13T07:10:06.321270413Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2025-06-13T07:10:06.322137169Z level=info msg="Migration successfully executed" id="create cache_data table" duration=866.486µs grafana | logger=migrator t=2025-06-13T07:10:06.328363528Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2025-06-13T07:10:06.329890937Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.526499ms grafana | logger=migrator t=2025-06-13T07:10:06.336431652Z level=info msg="Executing migration" id="create short_url table v1" grafana | logger=migrator t=2025-06-13T07:10:06.33948618Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=3.047268ms grafana | logger=migrator t=2025-06-13T07:10:06.343120919Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2025-06-13T07:10:06.344099788Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=978.439µs grafana | logger=migrator t=2025-06-13T07:10:06.349998321Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator t=2025-06-13T07:10:06.350051981Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=57.861µs grafana | logger=migrator t=2025-06-13T07:10:06.357957373Z level=info msg="Executing migration" id="delete 
alert_definition table" grafana | logger=migrator t=2025-06-13T07:10:06.358078635Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=124.302µs grafana | logger=migrator t=2025-06-13T07:10:06.362639302Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2025-06-13T07:10:06.363761133Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.129382ms grafana | logger=migrator t=2025-06-13T07:10:06.36725175Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-13T07:10:06.368296239Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.044469ms grafana | logger=migrator t=2025-06-13T07:10:06.371271557Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-13T07:10:06.372405148Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.133551ms grafana | logger=migrator t=2025-06-13T07:10:06.376494686Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-13T07:10:06.376512206Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=18.67µs grafana | logger=migrator t=2025-06-13T07:10:06.379556414Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-13T07:10:06.380800028Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.242114ms grafana | logger=migrator t=2025-06-13T07:10:06.384866316Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-13T07:10:06.385756952Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=890.826µs grafana | logger=migrator t=2025-06-13T07:10:06.389934842Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-13T07:10:06.39191829Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.982588ms grafana | logger=migrator t=2025-06-13T07:10:06.395472688Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-13T07:10:06.396570909Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.097921ms grafana | logger=migrator t=2025-06-13T07:10:06.399584916Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2025-06-13T07:10:06.407706492Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=8.118726ms grafana | logger=migrator t=2025-06-13T07:10:06.412055094Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2025-06-13T07:10:06.413272048Z level=info msg="Migration successfully executed" id="drop alert_definition table" 
duration=1.216804ms grafana | logger=migrator t=2025-06-13T07:10:06.421955703Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2025-06-13T07:10:06.422070305Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=109.402µs grafana | logger=migrator t=2025-06-13T07:10:06.426485609Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2025-06-13T07:10:06.427426808Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=941.009µs grafana | logger=migrator t=2025-06-13T07:10:06.433018314Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2025-06-13T07:10:06.433753508Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=734.254µs grafana | logger=migrator t=2025-06-13T07:10:06.436648033Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator t=2025-06-13T07:10:06.437721254Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.0704ms grafana | logger=migrator t=2025-06-13T07:10:06.442081197Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-13T07:10:06.442100267Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=21.87µs grafana | logger=migrator t=2025-06-13T07:10:06.445171936Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2025-06-13T07:10:06.446292487Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.119971ms grafana | logger=migrator t=2025-06-13T07:10:06.449554439Z level=info msg="Executing migration" id="create alert_instance table" grafana | logger=migrator t=2025-06-13T07:10:06.450737392Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.182213ms grafana | logger=migrator t=2025-06-13T07:10:06.455770628Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2025-06-13T07:10:06.456589293Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=818.805µs grafana | logger=migrator t=2025-06-13T07:10:06.459739394Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2025-06-13T07:10:06.460466348Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=727.133µs grafana | logger=migrator t=2025-06-13T07:10:06.464050046Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2025-06-13T07:10:06.470353696Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.30265ms grafana | logger=migrator 
t=2025-06-13T07:10:06.474912533Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-13T07:10:06.476156256Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.244183ms grafana | logger=migrator t=2025-06-13T07:10:06.479178585Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-13T07:10:06.480693683Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.514538ms grafana | logger=migrator t=2025-06-13T07:10:06.486563895Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2025-06-13T07:10:06.514193542Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=27.629547ms grafana | logger=migrator t=2025-06-13T07:10:06.517919593Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2025-06-13T07:10:06.545970638Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=28.050725ms grafana | logger=migrator t=2025-06-13T07:10:06.548995215Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-13T07:10:06.550366562Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.370147ms grafana | logger=migrator t=2025-06-13T07:10:06.554726676Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-13T07:10:06.556363236Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.635861ms grafana | logger=migrator t=2025-06-13T07:10:06.559951085Z level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | logger=migrator t=2025-06-13T07:10:06.566425548Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.473733ms grafana | logger=migrator t=2025-06-13T07:10:06.570010886Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2025-06-13T07:10:06.576016461Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=6.004885ms grafana | logger=migrator t=2025-06-13T07:10:06.580103689Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2025-06-13T07:10:06.581107778Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.003619ms grafana | logger=migrator t=2025-06-13T07:10:06.584952862Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2025-06-13T07:10:06.587482199Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=2.529087ms grafana | logger=migrator t=2025-06-13T07:10:06.596228317Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" grafana | logger=migrator t=2025-06-13T07:10:06.598139923Z 
level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.911276ms grafana | logger=migrator t=2025-06-13T07:10:06.603765441Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" grafana | logger=migrator t=2025-06-13T07:10:06.604744749Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=983.078µs grafana | logger=migrator t=2025-06-13T07:10:06.607980141Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-13T07:10:06.607998021Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=18.69µs grafana | logger=migrator t=2025-06-13T07:10:06.611350255Z level=info msg="Executing migration" id="add column for to alert_rule" grafana | logger=migrator t=2025-06-13T07:10:06.618732886Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=7.379721ms grafana | logger=migrator t=2025-06-13T07:10:06.623197641Z level=info msg="Executing migration" id="add column annotations to alert_rule" grafana | logger=migrator t=2025-06-13T07:10:06.630560681Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=7.362431ms grafana | logger=migrator t=2025-06-13T07:10:06.634316393Z level=info msg="Executing migration" id="add column labels to alert_rule" grafana | logger=migrator t=2025-06-13T07:10:06.641014811Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.697698ms grafana | logger=migrator t=2025-06-13T07:10:06.647895942Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" grafana | logger=migrator t=2025-06-13T07:10:06.649229398Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.330146ms grafana | logger=migrator t=2025-06-13T07:10:06.653661132Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" grafana | logger=migrator t=2025-06-13T07:10:06.655015958Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.355136ms grafana | logger=migrator t=2025-06-13T07:10:06.658604736Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" grafana | logger=migrator t=2025-06-13T07:10:06.665295384Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.690268ms grafana | logger=migrator t=2025-06-13T07:10:06.680676668Z level=info msg="Executing migration" id="add panel_id column to alert_rule" grafana | logger=migrator t=2025-06-13T07:10:06.688222061Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=7.547713ms grafana | logger=migrator t=2025-06-13T07:10:06.692642976Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" grafana | logger=migrator t=2025-06-13T07:10:06.693487522Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=840.626µs grafana | logger=migrator t=2025-06-13T07:10:06.697509249Z level=info msg="Executing migration" id="add 
rule_group_idx column to alert_rule" grafana | logger=migrator t=2025-06-13T07:10:06.707301895Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=9.793797ms grafana | logger=migrator t=2025-06-13T07:10:06.711574077Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" grafana | logger=migrator t=2025-06-13T07:10:06.718385376Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.810549ms grafana | logger=migrator t=2025-06-13T07:10:06.72172539Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" grafana | logger=migrator t=2025-06-13T07:10:06.721916863Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=192.733µs grafana | logger=migrator t=2025-06-13T07:10:06.725634785Z level=info msg="Executing migration" id="create alert_rule_version table" grafana | logger=migrator t=2025-06-13T07:10:06.727584522Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.949007ms grafana | logger=migrator t=2025-06-13T07:10:06.73322047Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-13T07:10:06.734515823Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.294853ms grafana | logger=migrator t=2025-06-13T07:10:06.738133163Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" grafana | logger=migrator t=2025-06-13T07:10:06.739882246Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.747113ms grafana | logger=migrator t=2025-06-13T07:10:06.74427309Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-13T07:10:06.744498085Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=225.925µs grafana | logger=migrator t=2025-06-13T07:10:06.750104411Z level=info msg="Executing migration" id="add column for to alert_rule_version" grafana | logger=migrator t=2025-06-13T07:10:06.75686882Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.763999ms grafana | logger=migrator t=2025-06-13T07:10:06.768535703Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" grafana | logger=migrator t=2025-06-13T07:10:06.778176077Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=9.636704ms grafana | logger=migrator t=2025-06-13T07:10:06.781356197Z level=info msg="Executing migration" id="add column labels to alert_rule_version" grafana | logger=migrator t=2025-06-13T07:10:06.785996786Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.643269ms grafana | logger=migrator t=2025-06-13T07:10:06.793402517Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" grafana | logger=migrator t=2025-06-13T07:10:06.801911199Z level=info msg="Migration successfully executed" id="add rule_group_idx 
column to alert_rule_version" duration=8.511032ms grafana | logger=migrator t=2025-06-13T07:10:06.804899166Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" grafana | logger=migrator t=2025-06-13T07:10:06.813297736Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=8.39581ms grafana | logger=migrator t=2025-06-13T07:10:06.816668831Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" grafana | logger=migrator t=2025-06-13T07:10:06.816686481Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=18.83µs grafana | logger=migrator t=2025-06-13T07:10:06.826176152Z level=info msg="Executing migration" id=create_alert_configuration_table grafana | logger=migrator t=2025-06-13T07:10:06.827305634Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.129471ms grafana | logger=migrator t=2025-06-13T07:10:06.83705915Z level=info msg="Executing migration" id="Add column default in alert_configuration" grafana | logger=migrator t=2025-06-13T07:10:06.844215626Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=7.157216ms grafana | logger=migrator t=2025-06-13T07:10:06.859735932Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" grafana | logger=migrator t=2025-06-13T07:10:06.859764852Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=30.31µs grafana | logger=migrator t=2025-06-13T07:10:06.868680372Z level=info msg="Executing migration" id="add column org_id in alert_configuration" grafana | logger=migrator t=2025-06-13T07:10:06.874990383Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.309651ms grafana | logger=migrator t=2025-06-13T07:10:06.879123002Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" grafana | logger=migrator t=2025-06-13T07:10:06.88007882Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=955.568µs grafana | logger=migrator t=2025-06-13T07:10:06.883477315Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" grafana | logger=migrator t=2025-06-13T07:10:06.890364576Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.884801ms grafana | logger=migrator t=2025-06-13T07:10:06.894832721Z level=info msg="Executing migration" id=create_ngalert_configuration_table grafana | logger=migrator t=2025-06-13T07:10:06.895684468Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=858.397µs grafana | logger=migrator t=2025-06-13T07:10:06.899952859Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" grafana | logger=migrator t=2025-06-13T07:10:06.900984218Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.030849ms grafana | logger=migrator t=2025-06-13T07:10:06.905204209Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" grafana | 
logger=migrator t=2025-06-13T07:10:06.91576947Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=10.565081ms grafana | logger=migrator t=2025-06-13T07:10:06.91997394Z level=info msg="Executing migration" id="create provenance_type table" grafana | logger=migrator t=2025-06-13T07:10:06.920839178Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=864.478µs grafana | logger=migrator t=2025-06-13T07:10:06.926294241Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" grafana | logger=migrator t=2025-06-13T07:10:06.927363042Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.068531ms grafana | logger=migrator t=2025-06-13T07:10:06.930506121Z level=info msg="Executing migration" id="create alert_image table" grafana | logger=migrator t=2025-06-13T07:10:06.931372939Z level=info msg="Migration successfully executed" id="create alert_image table" duration=866.668µs grafana | logger=migrator t=2025-06-13T07:10:06.943098861Z level=info msg="Executing migration" id="add unique index on token to alert_image table" grafana | logger=migrator t=2025-06-13T07:10:06.944483688Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.384057ms grafana | logger=migrator t=2025-06-13T07:10:06.951494902Z level=info msg="Executing migration" id="support longer URLs in alert_image table" grafana | logger=migrator t=2025-06-13T07:10:06.951521552Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=28.381µs grafana | logger=migrator t=2025-06-13T07:10:06.955122772Z level=info msg="Executing migration" id=create_alert_configuration_history_table grafana | logger=migrator t=2025-06-13T07:10:06.957271112Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=2.147291ms grafana | logger=migrator t=2025-06-13T07:10:06.960964313Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" grafana | logger=migrator t=2025-06-13T07:10:06.961960371Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=996.258µs grafana | logger=migrator t=2025-06-13T07:10:06.967289963Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-13T07:10:06.967699701Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-13T07:10:06.970335991Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" grafana | logger=migrator t=2025-06-13T07:10:06.970720869Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=385.018µs grafana | logger=migrator t=2025-06-13T07:10:06.974021692Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" grafana | logger=migrator t=2025-06-13T07:10:06.97498501Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=961.908µs grafana | logger=migrator t=2025-06-13T07:10:06.980213279Z level=info msg="Executing migration" id="add 
last_applied column to alert_configuration_history" grafana | logger=migrator t=2025-06-13T07:10:06.990442695Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=10.228236ms grafana | logger=migrator t=2025-06-13T07:10:06.993961312Z level=info msg="Executing migration" id="create library_element table v1" grafana | logger=migrator t=2025-06-13T07:10:06.994999762Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.03912ms grafana | logger=migrator t=2025-06-13T07:10:06.998401216Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" grafana | logger=migrator t=2025-06-13T07:10:06.999574319Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.173283ms grafana | logger=migrator t=2025-06-13T07:10:07.01435031Z level=info msg="Executing migration" id="create library_element_connection table v1" grafana | logger=migrator t=2025-06-13T07:10:07.01641415Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=2.06467ms grafana | logger=migrator t=2025-06-13T07:10:07.020204693Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" grafana | logger=migrator t=2025-06-13T07:10:07.02161978Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.414487ms grafana | logger=migrator t=2025-06-13T07:10:07.025261659Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" grafana | logger=migrator t=2025-06-13T07:10:07.026484132Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.218452ms grafana | logger=migrator t=2025-06-13T07:10:07.031908395Z level=info msg="Executing migration" id="increase max description length to 2048" grafana | logger=migrator t=2025-06-13T07:10:07.031975236Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=67.301µs grafana | logger=migrator t=2025-06-13T07:10:07.036149766Z level=info msg="Executing migration" id="alter library_element model to mediumtext" grafana | logger=migrator t=2025-06-13T07:10:07.036187857Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=38.621µs grafana | logger=migrator t=2025-06-13T07:10:07.039042611Z level=info msg="Executing migration" id="add library_element folder uid" grafana | logger=migrator t=2025-06-13T07:10:07.047076065Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=8.030834ms grafana | logger=migrator t=2025-06-13T07:10:07.052508639Z level=info msg="Executing migration" id="populate library_element folder_uid" grafana | logger=migrator t=2025-06-13T07:10:07.053161121Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=652.302µs grafana | logger=migrator t=2025-06-13T07:10:07.05627545Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" grafana | logger=migrator t=2025-06-13T07:10:07.057433413Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.157663ms grafana | logger=migrator t=2025-06-13T07:10:07.061209374Z level=info msg="Executing migration" 
id="clone move dashboard alerts to unified alerting" grafana | logger=migrator t=2025-06-13T07:10:07.061692624Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=482.6µs grafana | logger=migrator t=2025-06-13T07:10:07.065682709Z level=info msg="Executing migration" id="create data_keys table" grafana | logger=migrator t=2025-06-13T07:10:07.066913463Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.230654ms grafana | logger=migrator t=2025-06-13T07:10:07.072830876Z level=info msg="Executing migration" id="create secrets table" grafana | logger=migrator t=2025-06-13T07:10:07.077692469Z level=info msg="Migration successfully executed" id="create secrets table" duration=4.859393ms grafana | logger=migrator t=2025-06-13T07:10:07.082862557Z level=info msg="Executing migration" id="rename data_keys name column to id" grafana | logger=migrator t=2025-06-13T07:10:07.122246868Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=39.380231ms grafana | logger=migrator t=2025-06-13T07:10:07.12599682Z level=info msg="Executing migration" id="add name column into data_keys" grafana | logger=migrator t=2025-06-13T07:10:07.131678998Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.681958ms grafana | logger=migrator t=2025-06-13T07:10:07.137367557Z level=info msg="Executing migration" id="copy data_keys id column values into name" grafana | logger=migrator t=2025-06-13T07:10:07.137660282Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=292.045µs grafana | logger=migrator t=2025-06-13T07:10:07.140872894Z level=info msg="Executing migration" id="rename data_keys name column to label" grafana | logger=migrator t=2025-06-13T07:10:07.188640914Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=47.76273ms grafana | logger=migrator t=2025-06-13T07:10:07.191919297Z level=info msg="Executing migration" id="rename data_keys id column back to name" grafana | logger=migrator t=2025-06-13T07:10:07.224691362Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=32.769025ms grafana | logger=migrator t=2025-06-13T07:10:07.233793956Z level=info msg="Executing migration" id="create kv_store table v1" grafana | logger=migrator t=2025-06-13T07:10:07.235002078Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.212022ms grafana | logger=migrator t=2025-06-13T07:10:07.238404033Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" grafana | logger=migrator t=2025-06-13T07:10:07.240123817Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.719224ms grafana | logger=migrator t=2025-06-13T07:10:07.244115712Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" grafana | logger=migrator t=2025-06-13T07:10:07.244578271Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=461.799µs grafana | logger=migrator t=2025-06-13T07:10:07.2497531Z level=info msg="Executing migration" id="create permission table" grafana | logger=migrator t=2025-06-13T07:10:07.25080095Z level=info msg="Migration successfully executed" id="create permission table" 
duration=1.04747ms grafana | logger=migrator t=2025-06-13T07:10:07.255412597Z level=info msg="Executing migration" id="add unique index permission.role_id" grafana | logger=migrator t=2025-06-13T07:10:07.25657929Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.166713ms grafana | logger=migrator t=2025-06-13T07:10:07.260163498Z level=info msg="Executing migration" id="add unique index role_id_action_scope" grafana | logger=migrator t=2025-06-13T07:10:07.261683167Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.517059ms grafana | logger=migrator t=2025-06-13T07:10:07.265044411Z level=info msg="Executing migration" id="create role table" grafana | logger=migrator t=2025-06-13T07:10:07.266199214Z level=info msg="Migration successfully executed" id="create role table" duration=1.154743ms grafana | logger=migrator t=2025-06-13T07:10:07.272327731Z level=info msg="Executing migration" id="add column display_name" grafana | logger=migrator t=2025-06-13T07:10:07.28067804Z level=info msg="Migration successfully executed" id="add column display_name" duration=8.348759ms grafana | logger=migrator t=2025-06-13T07:10:07.283975182Z level=info msg="Executing migration" id="add column group_name" grafana | logger=migrator t=2025-06-13T07:10:07.292045127Z level=info msg="Migration successfully executed" id="add column group_name" duration=8.066935ms grafana | logger=migrator t=2025-06-13T07:10:07.304462824Z level=info msg="Executing migration" id="add index role.org_id" grafana | logger=migrator t=2025-06-13T07:10:07.305904631Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.443327ms grafana | logger=migrator t=2025-06-13T07:10:07.309423788Z level=info msg="Executing migration" id="add unique index role_org_id_name" grafana | logger=migrator t=2025-06-13T07:10:07.31062086Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.196832ms grafana | logger=migrator t=2025-06-13T07:10:07.314182758Z level=info msg="Executing migration" id="add index role_org_id_uid" grafana | logger=migrator t=2025-06-13T07:10:07.315390822Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.208164ms grafana | logger=migrator t=2025-06-13T07:10:07.319529181Z level=info msg="Executing migration" id="create team role table" grafana | logger=migrator t=2025-06-13T07:10:07.320575931Z level=info msg="Migration successfully executed" id="create team role table" duration=1.04654ms grafana | logger=migrator t=2025-06-13T07:10:07.325130797Z level=info msg="Executing migration" id="add index team_role.org_id" grafana | logger=migrator t=2025-06-13T07:10:07.326773108Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.641241ms grafana | logger=migrator t=2025-06-13T07:10:07.33154632Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" grafana | logger=migrator t=2025-06-13T07:10:07.333605089Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=2.058589ms grafana | logger=migrator t=2025-06-13T07:10:07.337963722Z level=info msg="Executing migration" id="add index team_role.team_id" grafana | logger=migrator t=2025-06-13T07:10:07.339249227Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.285875ms grafana | logger=migrator 
t=2025-06-13T07:10:07.342470678Z level=info msg="Executing migration" id="create user role table" grafana | logger=migrator t=2025-06-13T07:10:07.343460677Z level=info msg="Migration successfully executed" id="create user role table" duration=990.049µs grafana | logger=migrator t=2025-06-13T07:10:07.347080206Z level=info msg="Executing migration" id="add index user_role.org_id" grafana | logger=migrator t=2025-06-13T07:10:07.34834703Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.266284ms grafana | logger=migrator t=2025-06-13T07:10:07.353857125Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" grafana | logger=migrator t=2025-06-13T07:10:07.355307923Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.454408ms grafana | logger=migrator t=2025-06-13T07:10:07.361267517Z level=info msg="Executing migration" id="add index user_role.user_id" grafana | logger=migrator t=2025-06-13T07:10:07.362886878Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.619641ms grafana | logger=migrator t=2025-06-13T07:10:07.366180991Z level=info msg="Executing migration" id="create builtin role table" grafana | logger=migrator t=2025-06-13T07:10:07.367095138Z level=info msg="Migration successfully executed" id="create builtin role table" duration=913.957µs grafana | logger=migrator t=2025-06-13T07:10:07.37192521Z level=info msg="Executing migration" id="add index builtin_role.role_id" grafana | logger=migrator t=2025-06-13T07:10:07.373130073Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.209193ms grafana | logger=migrator t=2025-06-13T07:10:07.376570109Z level=info msg="Executing migration" id="add index builtin_role.name" grafana | logger=migrator t=2025-06-13T07:10:07.377636179Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.06638ms grafana | logger=migrator t=2025-06-13T07:10:07.381104555Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" grafana | logger=migrator t=2025-06-13T07:10:07.391098935Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=9.99028ms grafana | logger=migrator t=2025-06-13T07:10:07.394886728Z level=info msg="Executing migration" id="add index builtin_role.org_id" grafana | logger=migrator t=2025-06-13T07:10:07.396308965Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.423547ms grafana | logger=migrator t=2025-06-13T07:10:07.400378413Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" grafana | logger=migrator t=2025-06-13T07:10:07.401525834Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.147601ms grafana | logger=migrator t=2025-06-13T07:10:07.408042389Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" grafana | logger=migrator t=2025-06-13T07:10:07.409786402Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.743934ms grafana | logger=migrator t=2025-06-13T07:10:07.415796097Z level=info msg="Executing migration" id="add unique index role.uid" grafana | logger=migrator t=2025-06-13T07:10:07.416891977Z level=info msg="Migration successfully executed" id="add unique index role.uid" 
duration=1.09553ms grafana | logger=migrator t=2025-06-13T07:10:07.421464315Z level=info msg="Executing migration" id="create seed assignment table" grafana | logger=migrator t=2025-06-13T07:10:07.422798411Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.332946ms grafana | logger=migrator t=2025-06-13T07:10:07.427627572Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" grafana | logger=migrator t=2025-06-13T07:10:07.429358015Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.730613ms grafana | logger=migrator t=2025-06-13T07:10:07.436329378Z level=info msg="Executing migration" id="add column hidden to role table" grafana | logger=migrator t=2025-06-13T07:10:07.445921512Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=9.592583ms grafana | logger=migrator t=2025-06-13T07:10:07.451715252Z level=info msg="Executing migration" id="permission kind migration" grafana | logger=migrator t=2025-06-13T07:10:07.457818598Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.101816ms grafana | logger=migrator t=2025-06-13T07:10:07.462932586Z level=info msg="Executing migration" id="permission attribute migration" grafana | logger=migrator t=2025-06-13T07:10:07.472521569Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=9.585583ms grafana | logger=migrator t=2025-06-13T07:10:07.477230988Z level=info msg="Executing migration" id="permission identifier migration" grafana | logger=migrator t=2025-06-13T07:10:07.486553356Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=9.317638ms grafana | logger=migrator t=2025-06-13T07:10:07.494668461Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2025-06-13T07:10:07.496028417Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.362226ms grafana | logger=migrator t=2025-06-13T07:10:07.503319346Z level=info msg="Executing migration" id="add permission action scope role_id index" grafana | logger=migrator t=2025-06-13T07:10:07.504573299Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.254703ms grafana | logger=migrator t=2025-06-13T07:10:07.514815435Z level=info msg="Executing migration" id="remove permission role_id action scope index" grafana | logger=migrator t=2025-06-13T07:10:07.521287218Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=6.472013ms grafana | logger=migrator t=2025-06-13T07:10:07.529589347Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" grafana | logger=migrator t=2025-06-13T07:10:07.537714732Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=8.128175ms grafana | logger=migrator t=2025-06-13T07:10:07.541941783Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" grafana | logger=migrator t=2025-06-13T07:10:07.542795559Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=854.016µs grafana | logger=migrator t=2025-06-13T07:10:07.548772853Z level=info msg="Executing migration" id="remove user_role org 
ID, user ID, role ID index" grafana | logger=migrator t=2025-06-13T07:10:07.549548798Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=776.095µs grafana | logger=migrator t=2025-06-13T07:10:07.554481822Z level=info msg="Executing migration" id="create query_history table v1" grafana | logger=migrator t=2025-06-13T07:10:07.556180834Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.698152ms grafana | logger=migrator t=2025-06-13T07:10:07.566676494Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" grafana | logger=migrator t=2025-06-13T07:10:07.568274164Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.57771ms grafana | logger=migrator t=2025-06-13T07:10:07.573233579Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" grafana | logger=migrator t=2025-06-13T07:10:07.573253149Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=20.61µs grafana | logger=migrator t=2025-06-13T07:10:07.577438499Z level=info msg="Executing migration" id="create query_history_details table v1" grafana | logger=migrator t=2025-06-13T07:10:07.578283326Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=845.257µs grafana | logger=migrator t=2025-06-13T07:10:07.582936264Z level=info msg="Executing migration" id="rbac disabled migrator" grafana | logger=migrator t=2025-06-13T07:10:07.582995825Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=60.021µs grafana | logger=migrator t=2025-06-13T07:10:07.588955459Z level=info msg="Executing migration" id="teams permissions migration" grafana | logger=migrator t=2025-06-13T07:10:07.589411828Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=457.159µs grafana | logger=migrator t=2025-06-13T07:10:07.593507435Z level=info msg="Executing migration" id="dashboard permissions" grafana | logger=migrator t=2025-06-13T07:10:07.594131527Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=625.032µs grafana | logger=migrator t=2025-06-13T07:10:07.598065223Z level=info msg="Executing migration" id="dashboard permissions uid scopes" grafana | logger=migrator t=2025-06-13T07:10:07.598775866Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=711.373µs grafana | logger=migrator t=2025-06-13T07:10:07.6037074Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2025-06-13T07:10:07.603909984Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=202.964µs grafana | logger=migrator t=2025-06-13T07:10:07.608499712Z level=info msg="Executing migration" id="alerting notification permissions" grafana | logger=migrator t=2025-06-13T07:10:07.608999402Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=500.28µs grafana | logger=migrator t=2025-06-13T07:10:07.612555159Z level=info msg="Executing migration" id="create query_history_star table v1" grafana | logger=migrator t=2025-06-13T07:10:07.613262193Z level=info msg="Migration successfully executed" id="create query_history_star table v1" 
duration=706.644µs grafana | logger=migrator t=2025-06-13T07:10:07.617206277Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2025-06-13T07:10:07.618204117Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=998.75µs grafana | logger=migrator t=2025-06-13T07:10:07.623229893Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2025-06-13T07:10:07.629552883Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=6.32197ms grafana | logger=migrator t=2025-06-13T07:10:07.635893164Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2025-06-13T07:10:07.635923975Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=32.871µs grafana | logger=migrator t=2025-06-13T07:10:07.649216628Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2025-06-13T07:10:07.650501573Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.285805ms grafana | logger=migrator t=2025-06-13T07:10:07.656924095Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2025-06-13T07:10:07.657795922Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=872.067µs grafana | logger=migrator t=2025-06-13T07:10:07.664850076Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2025-06-13T07:10:07.667548028Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=2.697222ms grafana | logger=migrator t=2025-06-13T07:10:07.671766008Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2025-06-13T07:10:07.684491281Z level=info msg="Migration successfully executed" id="add correlation config column" duration=12.715092ms grafana | logger=migrator t=2025-06-13T07:10:07.690891753Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2025-06-13T07:10:07.692975372Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=2.083329ms grafana | logger=migrator t=2025-06-13T07:10:07.697507049Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2025-06-13T07:10:07.699862964Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.355765ms grafana | logger=migrator t=2025-06-13T07:10:07.704631485Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T07:10:07.731723242Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=27.090737ms grafana | logger=migrator t=2025-06-13T07:10:07.734997445Z level=info msg="Executing migration" id="create correlation v2" grafana | logger=migrator t=2025-06-13T07:10:07.73579111Z level=info msg="Migration successfully executed" id="create correlation v2" duration=789.025µs grafana | logger=migrator t=2025-06-13T07:10:07.740722234Z level=info msg="Executing migration" id="create index 
IDX_correlation_uid - v2" grafana | logger=migrator t=2025-06-13T07:10:07.7416094Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=886.976µs grafana | logger=migrator t=2025-06-13T07:10:07.744508345Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2025-06-13T07:10:07.745630207Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.121982ms grafana | logger=migrator t=2025-06-13T07:10:07.748849418Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2025-06-13T07:10:07.74996271Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.113072ms grafana | logger=migrator t=2025-06-13T07:10:07.752819714Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2025-06-13T07:10:07.753045708Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=226.064µs grafana | logger=migrator t=2025-06-13T07:10:07.757398362Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" grafana | logger=migrator t=2025-06-13T07:10:07.758216147Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=817.225µs grafana | logger=migrator t=2025-06-13T07:10:07.761144723Z level=info msg="Executing migration" id="add provisioning column" grafana | logger=migrator t=2025-06-13T07:10:07.770520481Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.371288ms grafana | logger=migrator t=2025-06-13T07:10:07.773844545Z level=info msg="Executing migration" id="add type column" grafana | logger=migrator t=2025-06-13T07:10:07.780082584Z level=info msg="Migration successfully executed" id="add type column" duration=6.238759ms grafana | logger=migrator t=2025-06-13T07:10:07.78513637Z level=info msg="Executing migration" id="create entity_events table" grafana | logger=migrator t=2025-06-13T07:10:07.786858384Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.723364ms grafana | logger=migrator t=2025-06-13T07:10:07.797357134Z level=info msg="Executing migration" id="create dashboard public config v1" grafana | logger=migrator t=2025-06-13T07:10:07.799131047Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.779493ms grafana | logger=migrator t=2025-06-13T07:10:07.805492228Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-13T07:10:07.806414947Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-13T07:10:07.810183938Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-13T07:10:07.810685488Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-13T07:10:07.819861493Z level=info msg="Executing migration" id="Drop old dashboard public config table" grafana | logger=migrator t=2025-06-13T07:10:07.821257039Z level=info msg="Migration successfully executed" id="Drop old dashboard 
public config table" duration=1.395436ms grafana | logger=migrator t=2025-06-13T07:10:07.826885057Z level=info msg="Executing migration" id="recreate dashboard public config v1" grafana | logger=migrator t=2025-06-13T07:10:07.828327074Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.436137ms grafana | logger=migrator t=2025-06-13T07:10:07.831537365Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-13T07:10:07.832936932Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.399557ms grafana | logger=migrator t=2025-06-13T07:10:07.838818674Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-13T07:10:07.840894204Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=2.07476ms grafana | logger=migrator t=2025-06-13T07:10:07.846379899Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-13T07:10:07.84751348Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.133621ms grafana | logger=migrator t=2025-06-13T07:10:07.86221609Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-13T07:10:07.864031225Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.807165ms grafana | logger=migrator t=2025-06-13T07:10:07.870562189Z level=info msg="Executing migration" id="Drop public config table" grafana | logger=migrator t=2025-06-13T07:10:07.871981376Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.418377ms grafana | logger=migrator t=2025-06-13T07:10:07.875225259Z level=info msg="Executing migration" id="Recreate dashboard public config v2" grafana | logger=migrator t=2025-06-13T07:10:07.876571944Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.346015ms grafana | logger=migrator t=2025-06-13T07:10:07.880398137Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-13T07:10:07.881716062Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.317865ms grafana | logger=migrator t=2025-06-13T07:10:07.88526105Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-13T07:10:07.886551214Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.289944ms grafana | logger=migrator t=2025-06-13T07:10:07.890581002Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" grafana | logger=migrator t=2025-06-13T07:10:07.893028878Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=2.447047ms grafana | logger=migrator t=2025-06-13T07:10:07.897534904Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" grafana 
| logger=migrator t=2025-06-13T07:10:07.923334156Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=25.794492ms grafana | logger=migrator t=2025-06-13T07:10:07.928306461Z level=info msg="Executing migration" id="add annotations_enabled column" grafana | logger=migrator t=2025-06-13T07:10:07.937637809Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=9.330828ms grafana | logger=migrator t=2025-06-13T07:10:07.941644985Z level=info msg="Executing migration" id="add time_selection_enabled column" grafana | logger=migrator t=2025-06-13T07:10:07.949005405Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=7.36086ms grafana | logger=migrator t=2025-06-13T07:10:07.955825736Z level=info msg="Executing migration" id="delete orphaned public dashboards" grafana | logger=migrator t=2025-06-13T07:10:07.95606561Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=240.034µs grafana | logger=migrator t=2025-06-13T07:10:07.960898452Z level=info msg="Executing migration" id="add share column" grafana | logger=migrator t=2025-06-13T07:10:07.969849963Z level=info msg="Migration successfully executed" id="add share column" duration=8.951141ms grafana | logger=migrator t=2025-06-13T07:10:07.973299959Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" grafana | logger=migrator t=2025-06-13T07:10:07.973527043Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=231.194µs grafana | logger=migrator t=2025-06-13T07:10:07.978248303Z level=info msg="Executing migration" id="create file table" grafana | logger=migrator t=2025-06-13T07:10:07.982751529Z level=info msg="Migration successfully executed" id="create file table" duration=4.499036ms grafana | logger=migrator t=2025-06-13T07:10:07.995859679Z level=info msg="Executing migration" id="file table idx: path natural pk" grafana | logger=migrator t=2025-06-13T07:10:07.997155934Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.297435ms grafana | logger=migrator t=2025-06-13T07:10:08.000226592Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2025-06-13T07:10:08.001427175Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.200123ms grafana | logger=migrator t=2025-06-13T07:10:08.004429462Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2025-06-13T07:10:08.005271069Z level=info msg="Migration successfully executed" id="create file_meta table" duration=840.997µs grafana | logger=migrator t=2025-06-13T07:10:08.009175473Z level=info msg="Executing migration" id="file table idx: path key" grafana | logger=migrator t=2025-06-13T07:10:08.010355356Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.179103ms grafana | logger=migrator t=2025-06-13T07:10:08.013765471Z level=info msg="Executing migration" id="set path collation in file table" grafana | logger=migrator t=2025-06-13T07:10:08.013787671Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=23.22µs grafana | logger=migrator t=2025-06-13T07:10:08.016588715Z 
level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" grafana | logger=migrator t=2025-06-13T07:10:08.016606335Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=17.75µs grafana | logger=migrator t=2025-06-13T07:10:08.019394038Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator t=2025-06-13T07:10:08.01999397Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=599.512µs grafana | logger=migrator t=2025-06-13T07:10:08.02371015Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2025-06-13T07:10:08.023925244Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=214.854µs grafana | logger=migrator t=2025-06-13T07:10:08.026229199Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2025-06-13T07:10:08.027642816Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.412447ms grafana | logger=migrator t=2025-06-13T07:10:08.03049345Z level=info msg="Executing migration" id="Add UID column to playlist" grafana | logger=migrator t=2025-06-13T07:10:08.039826078Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.330918ms grafana | logger=migrator t=2025-06-13T07:10:08.044233781Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2025-06-13T07:10:08.044361104Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=126.843µs grafana | logger=migrator t=2025-06-13T07:10:08.04833554Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2025-06-13T07:10:08.049414601Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.078231ms grafana | logger=migrator t=2025-06-13T07:10:08.052875207Z level=info msg="Executing migration" id="update group index for alert rules" grafana | logger=migrator t=2025-06-13T07:10:08.053305384Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=431.027µs grafana | logger=migrator t=2025-06-13T07:10:08.056280101Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" grafana | logger=migrator t=2025-06-13T07:10:08.056641668Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=361.497µs grafana | logger=migrator t=2025-06-13T07:10:08.060585894Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator t=2025-06-13T07:10:08.061074832Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=488.878µs grafana | logger=migrator t=2025-06-13T07:10:08.063886456Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2025-06-13T07:10:08.073116462Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.228916ms grafana | logger=migrator t=2025-06-13T07:10:08.076293623Z level=info msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator t=2025-06-13T07:10:08.086943066Z level=info 
msg="Migration successfully executed" id="add scope column to seed_assignment" duration=10.650463ms grafana | logger=migrator t=2025-06-13T07:10:08.089660478Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" grafana | logger=migrator t=2025-06-13T07:10:08.090479553Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=818.475µs grafana | logger=migrator t=2025-06-13T07:10:08.094367508Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2025-06-13T07:10:08.17369453Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=79.320532ms grafana | logger=migrator t=2025-06-13T07:10:08.177006914Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2025-06-13T07:10:08.179099824Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=2.08878ms grafana | logger=migrator t=2025-06-13T07:10:08.182790234Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2025-06-13T07:10:08.183922606Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.132122ms grafana | logger=migrator t=2025-06-13T07:10:08.186897882Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2025-06-13T07:10:08.222030962Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=35.13127ms grafana | logger=migrator t=2025-06-13T07:10:08.226987217Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2025-06-13T07:10:08.23400136Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.015273ms grafana | logger=migrator t=2025-06-13T07:10:08.236959318Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2025-06-13T07:10:08.237307854Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=346.166µs grafana | logger=migrator t=2025-06-13T07:10:08.240047566Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2025-06-13T07:10:08.240228539Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=180.443µs grafana | logger=migrator t=2025-06-13T07:10:08.243094254Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | logger=migrator t=2025-06-13T07:10:08.243341639Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=245.825µs grafana | logger=migrator t=2025-06-13T07:10:08.246339696Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2025-06-13T07:10:08.24653549Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=195.183µs grafana | logger=migrator t=2025-06-13T07:10:08.249352753Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator 
t=2025-06-13T07:10:08.249555438Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=202.305µs grafana | logger=migrator t=2025-06-13T07:10:08.252582745Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2025-06-13T07:10:08.255908879Z level=info msg="Migration successfully executed" id="create folder table" duration=3.237531ms grafana | logger=migrator t=2025-06-13T07:10:08.260321773Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2025-06-13T07:10:08.262388062Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=2.065729ms grafana | logger=migrator t=2025-06-13T07:10:08.26645703Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2025-06-13T07:10:08.267664242Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.204352ms grafana | logger=migrator t=2025-06-13T07:10:08.270557128Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2025-06-13T07:10:08.270587819Z level=info msg="Migration successfully executed" id="Update folder title length" duration=28.511µs grafana | logger=migrator t=2025-06-13T07:10:08.273690338Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-13T07:10:08.27486991Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.175792ms grafana | logger=migrator t=2025-06-13T07:10:08.281181651Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-13T07:10:08.283098757Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.915015ms grafana | logger=migrator t=2025-06-13T07:10:08.286702916Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2025-06-13T07:10:08.288906558Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=2.202832ms grafana | logger=migrator t=2025-06-13T07:10:08.291998557Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2025-06-13T07:10:08.292435725Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=438.178µs grafana | logger=migrator t=2025-06-13T07:10:08.29636793Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2025-06-13T07:10:08.296648436Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=280.266µs grafana | logger=migrator t=2025-06-13T07:10:08.299860616Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2025-06-13T07:10:08.301736772Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.876086ms grafana | logger=migrator t=2025-06-13T07:10:08.305232259Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" grafana | logger=migrator t=2025-06-13T07:10:08.307425301Z level=info msg="Migration successfully executed" id="Add 
unique index UQE_folder_org_id_uid" duration=2.191792ms grafana | logger=migrator t=2025-06-13T07:10:08.312183762Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" grafana | logger=migrator t=2025-06-13T07:10:08.313790792Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.6075ms grafana | logger=migrator t=2025-06-13T07:10:08.320132003Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-13T07:10:08.32206057Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.931977ms grafana | logger=migrator t=2025-06-13T07:10:08.325933334Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" grafana | logger=migrator t=2025-06-13T07:10:08.327046385Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.112601ms grafana | logger=migrator t=2025-06-13T07:10:08.330848028Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-13T07:10:08.332642762Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.794894ms grafana | logger=migrator t=2025-06-13T07:10:08.33675594Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2025-06-13T07:10:08.338154117Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.397847ms grafana | logger=migrator t=2025-06-13T07:10:08.346595418Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2025-06-13T07:10:08.3477629Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.163742ms grafana | logger=migrator t=2025-06-13T07:10:08.353708593Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2025-06-13T07:10:08.35560195Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.891797ms grafana | logger=migrator t=2025-06-13T07:10:08.359722158Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2025-06-13T07:10:08.360598995Z level=info msg="Migration successfully executed" id="create signing_key table" duration=875.227µs grafana | logger=migrator t=2025-06-13T07:10:08.36557106Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2025-06-13T07:10:08.367400345Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.831126ms grafana | logger=migrator t=2025-06-13T07:10:08.371706497Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2025-06-13T07:10:08.37290719Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.203893ms grafana | logger=migrator t=2025-06-13T07:10:08.377169631Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator t=2025-06-13T07:10:08.377448276Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" 
duration=279.355µs grafana | logger=migrator t=2025-06-13T07:10:08.381460103Z level=info msg="Executing migration" id="Add folder_uid for dashboard" grafana | logger=migrator t=2025-06-13T07:10:08.391457763Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.99708ms grafana | logger=migrator t=2025-06-13T07:10:08.397652522Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2025-06-13T07:10:08.398915536Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.264544ms grafana | logger=migrator t=2025-06-13T07:10:08.406253555Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-13T07:10:08.406334207Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=84.252µs grafana | logger=migrator t=2025-06-13T07:10:08.409797713Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-13T07:10:08.411145819Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.348036ms grafana | logger=migrator t=2025-06-13T07:10:08.414707467Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-13T07:10:08.414735897Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=29.801µs grafana | logger=migrator t=2025-06-13T07:10:08.418596471Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-13T07:10:08.420517568Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.920287ms grafana | logger=migrator t=2025-06-13T07:10:08.423601276Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-13T07:10:08.424784669Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.183013ms grafana | logger=migrator t=2025-06-13T07:10:08.428593802Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-13T07:10:08.429755074Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.158813ms grafana | logger=migrator t=2025-06-13T07:10:08.433365883Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2025-06-13T07:10:08.434533795Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.157112ms grafana | logger=migrator t=2025-06-13T07:10:08.440334196Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2025-06-13T07:10:08.441185392Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=851.956µs grafana | logger=migrator t=2025-06-13T07:10:08.445740849Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" grafana | logger=migrator t=2025-06-13T07:10:08.446040284Z level=info 
msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=299.625µs grafana | logger=migrator t=2025-06-13T07:10:08.448705275Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" grafana | logger=migrator t=2025-06-13T07:10:08.449355858Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=648.413µs grafana | logger=migrator t=2025-06-13T07:10:08.452345455Z level=info msg="Executing migration" id="create cloud_migration table v1" grafana | logger=migrator t=2025-06-13T07:10:08.453308044Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=962.068µs grafana | logger=migrator t=2025-06-13T07:10:08.457069295Z level=info msg="Executing migration" id="create cloud_migration_run table v1" grafana | logger=migrator t=2025-06-13T07:10:08.458061643Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=992.098µs grafana | logger=migrator t=2025-06-13T07:10:08.461261715Z level=info msg="Executing migration" id="add stack_id column" grafana | logger=migrator t=2025-06-13T07:10:08.471549821Z level=info msg="Migration successfully executed" id="add stack_id column" duration=10.288426ms grafana | logger=migrator t=2025-06-13T07:10:08.4746425Z level=info msg="Executing migration" id="add region_slug column" grafana | logger=migrator t=2025-06-13T07:10:08.483075511Z level=info msg="Migration successfully executed" id="add region_slug column" duration=8.432311ms grafana | logger=migrator t=2025-06-13T07:10:08.487406533Z level=info msg="Executing migration" id="add cluster_slug column" grafana | logger=migrator t=2025-06-13T07:10:08.497141309Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=9.732996ms grafana | logger=migrator t=2025-06-13T07:10:08.5003392Z level=info msg="Executing migration" id="add migration uid column" grafana | logger=migrator t=2025-06-13T07:10:08.509488305Z level=info msg="Migration successfully executed" id="add migration uid column" duration=9.149175ms grafana | logger=migrator t=2025-06-13T07:10:08.51605277Z level=info msg="Executing migration" id="Update uid column values for migration" grafana | logger=migrator t=2025-06-13T07:10:08.516280944Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=227.244µs grafana | logger=migrator t=2025-06-13T07:10:08.519867563Z level=info msg="Executing migration" id="Add unique index migration_uid" grafana | logger=migrator t=2025-06-13T07:10:08.521143536Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.275813ms grafana | logger=migrator t=2025-06-13T07:10:08.525306836Z level=info msg="Executing migration" id="add migration run uid column" grafana | logger=migrator t=2025-06-13T07:10:08.537869735Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=12.563709ms grafana | logger=migrator t=2025-06-13T07:10:08.541424613Z level=info msg="Executing migration" id="Update uid column values for migration run" grafana | logger=migrator t=2025-06-13T07:10:08.541553706Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=128.823µs grafana | logger=migrator t=2025-06-13T07:10:08.544547373Z level=info msg="Executing migration" id="Add unique index migration_run_uid" grafana | logger=migrator 
t=2025-06-13T07:10:08.545810988Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.263284ms grafana | logger=migrator t=2025-06-13T07:10:08.551344033Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T07:10:08.577769636Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=26.424883ms grafana | logger=migrator t=2025-06-13T07:10:08.580743564Z level=info msg="Executing migration" id="create cloud_migration_session v2" grafana | logger=migrator t=2025-06-13T07:10:08.581550219Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=806.655µs grafana | logger=migrator t=2025-06-13T07:10:08.585742579Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" grafana | logger=migrator t=2025-06-13T07:10:08.586780388Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=1.036999ms grafana | logger=migrator t=2025-06-13T07:10:08.590598172Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" grafana | logger=migrator t=2025-06-13T07:10:08.59104112Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=442.297µs grafana | logger=migrator t=2025-06-13T07:10:08.595844251Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" grafana | logger=migrator t=2025-06-13T07:10:08.596827761Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=985.12µs grafana | logger=migrator t=2025-06-13T07:10:08.599884899Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-13T07:10:08.628514575Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=28.626747ms grafana | logger=migrator t=2025-06-13T07:10:08.632751475Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" grafana | logger=migrator t=2025-06-13T07:10:08.633774894Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=1.023519ms grafana | logger=migrator t=2025-06-13T07:10:08.636760492Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" grafana | logger=migrator t=2025-06-13T07:10:08.637982445Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=1.222083ms grafana | logger=migrator t=2025-06-13T07:10:08.641628395Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" grafana | logger=migrator t=2025-06-13T07:10:08.641975311Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=347.066µs grafana | logger=migrator t=2025-06-13T07:10:08.645867256Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" grafana | logger=migrator t=2025-06-13T07:10:08.646812363Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=947.307µs grafana | logger=migrator t=2025-06-13T07:10:08.649913032Z level=info msg="Executing migration" id="add snapshot upload_url 
column" grafana | logger=migrator t=2025-06-13T07:10:08.660071766Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=10.155824ms grafana | logger=migrator t=2025-06-13T07:10:08.66446527Z level=info msg="Executing migration" id="add snapshot status column" grafana | logger=migrator t=2025-06-13T07:10:08.671866031Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=7.398921ms grafana | logger=migrator t=2025-06-13T07:10:08.675659383Z level=info msg="Executing migration" id="add snapshot local_directory column" grafana | logger=migrator t=2025-06-13T07:10:08.685893489Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=10.233146ms grafana | logger=migrator t=2025-06-13T07:10:08.689522388Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" grafana | logger=migrator t=2025-06-13T07:10:08.699369476Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=9.847178ms grafana | logger=migrator t=2025-06-13T07:10:08.70326631Z level=info msg="Executing migration" id="add snapshot encryption_key column" grafana | logger=migrator t=2025-06-13T07:10:08.717104334Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=13.836174ms grafana | logger=migrator t=2025-06-13T07:10:08.721648301Z level=info msg="Executing migration" id="add snapshot error_string column" grafana | logger=migrator t=2025-06-13T07:10:08.729374998Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=7.726137ms grafana | logger=migrator t=2025-06-13T07:10:08.732978137Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" grafana | logger=migrator t=2025-06-13T07:10:08.733988955Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=1.010008ms grafana | logger=migrator t=2025-06-13T07:10:08.736951393Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" grafana | logger=migrator t=2025-06-13T07:10:08.772896568Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=35.945055ms grafana | logger=migrator t=2025-06-13T07:10:08.777263191Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" grafana | logger=migrator t=2025-06-13T07:10:08.784670923Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=7.407572ms grafana | logger=migrator t=2025-06-13T07:10:08.787610909Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" grafana | logger=migrator t=2025-06-13T07:10:08.797364495Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=9.753307ms grafana | logger=migrator t=2025-06-13T07:10:08.800572336Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" grafana | logger=migrator t=2025-06-13T07:10:08.807772143Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=7.199137ms grafana | logger=migrator t=2025-06-13T07:10:08.811952433Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" grafana | logger=migrator t=2025-06-13T07:10:08.82333059Z level=info msg="Migration successfully executed" id="add 
cloud_migration_resource.error_code column" duration=11.362846ms grafana | logger=migrator t=2025-06-13T07:10:08.82648988Z level=info msg="Executing migration" id="increase resource_uid column length" grafana | logger=migrator t=2025-06-13T07:10:08.826516191Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=28.721µs grafana | logger=migrator t=2025-06-13T07:10:08.829413856Z level=info msg="Executing migration" id="alter kv_store.value to longtext" grafana | logger=migrator t=2025-06-13T07:10:08.829431286Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=15.39µs grafana | logger=migrator t=2025-06-13T07:10:08.834506133Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" grafana | logger=migrator t=2025-06-13T07:10:08.842945244Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=8.436791ms grafana | logger=migrator t=2025-06-13T07:10:08.846016782Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T07:10:08.856082775Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=10.064562ms grafana | logger=migrator t=2025-06-13T07:10:08.867310919Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" grafana | logger=migrator t=2025-06-13T07:10:08.867766438Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=455.249µs grafana | logger=migrator t=2025-06-13T07:10:08.87105508Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" grafana | logger=migrator t=2025-06-13T07:10:08.871453148Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=397.358µs grafana | logger=migrator t=2025-06-13T07:10:08.876823539Z level=info msg="Executing migration" id="add record column to alert_rule table" grafana | logger=migrator t=2025-06-13T07:10:08.888495272Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=11.670413ms grafana | logger=migrator t=2025-06-13T07:10:08.893696452Z level=info msg="Executing migration" id="add record column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T07:10:08.901072312Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=7.37549ms grafana | logger=migrator t=2025-06-13T07:10:08.905049788Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" grafana | logger=migrator t=2025-06-13T07:10:08.916893074Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=11.840756ms grafana | logger=migrator t=2025-06-13T07:10:08.923758656Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" grafana | logger=migrator t=2025-06-13T07:10:08.934600532Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=10.840617ms grafana | logger=migrator t=2025-06-13T07:10:08.937480227Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and 
alert.notifications.receivers.secrets:read" grafana | logger=migrator t=2025-06-13T07:10:08.937968116Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=486.569µs grafana | logger=migrator t=2025-06-13T07:10:08.941569965Z level=info msg="Executing migration" id="add metadata column to alert_rule table" grafana | logger=migrator t=2025-06-13T07:10:08.950551286Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=8.93754ms grafana | logger=migrator t=2025-06-13T07:10:08.960562257Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T07:10:08.968005889Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=7.446832ms grafana | logger=migrator t=2025-06-13T07:10:08.972916463Z level=info msg="Executing migration" id="delete orphaned service account permissions" grafana | logger=migrator t=2025-06-13T07:10:08.973274059Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=360.586µs grafana | logger=migrator t=2025-06-13T07:10:08.976643814Z level=info msg="Executing migration" id="adding action set permissions" grafana | logger=migrator t=2025-06-13T07:10:08.977281076Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=636.632µs grafana | logger=migrator t=2025-06-13T07:10:08.980220322Z level=info msg="Executing migration" id="create user_external_session table" grafana | logger=migrator t=2025-06-13T07:10:08.981435185Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.214523ms grafana | logger=migrator t=2025-06-13T07:10:08.985756677Z level=info msg="Executing migration" id="increase name_id column length to 1024" grafana | logger=migrator t=2025-06-13T07:10:08.985778378Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=20.701µs grafana | logger=migrator t=2025-06-13T07:10:08.988654692Z level=info msg="Executing migration" id="increase session_id column length to 1024" grafana | logger=migrator t=2025-06-13T07:10:08.988672183Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=20.341µs grafana | logger=migrator t=2025-06-13T07:10:08.991678881Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" grafana | logger=migrator t=2025-06-13T07:10:08.992127399Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=447.998µs grafana | logger=migrator t=2025-06-13T07:10:08.997458621Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T07:10:09.010288195Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=12.831904ms grafana | logger=migrator t=2025-06-13T07:10:09.01366019Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" grafana | logger=migrator t=2025-06-13T07:10:09.021708803Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=8.043073ms grafana | logger=migrator t=2025-06-13T07:10:09.028774618Z level=info msg="Executing migration" 
id="add alert_rule_state table" grafana | logger=migrator t=2025-06-13T07:10:09.033540468Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=4.76434ms grafana | logger=migrator t=2025-06-13T07:10:09.041082512Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" grafana | logger=migrator t=2025-06-13T07:10:09.042480589Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=1.397427ms grafana | logger=migrator t=2025-06-13T07:10:09.046504446Z level=info msg="Executing migration" id="add guid column to alert_rule table" grafana | logger=migrator t=2025-06-13T07:10:09.057910523Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=11.398167ms grafana | logger=migrator t=2025-06-13T07:10:09.062376129Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" grafana | logger=migrator t=2025-06-13T07:10:09.075206693Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=12.829674ms grafana | logger=migrator t=2025-06-13T07:10:09.080647317Z level=info msg="Executing migration" id="cleanup alert_rule_version table" grafana | logger=migrator t=2025-06-13T07:10:09.080669908Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 grafana | logger=migrator t=2025-06-13T07:10:09.080900092Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 grafana | logger=migrator t=2025-06-13T07:10:09.080915162Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=268.625µs grafana | logger=migrator t=2025-06-13T07:10:09.085527931Z level=info msg="Executing migration" id="populate rule guid in alert rule table" grafana | logger=migrator t=2025-06-13T07:10:09.086164583Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=637.312µs grafana | logger=migrator t=2025-06-13T07:10:09.089576307Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-13T07:10:09.091329811Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.752284ms grafana | logger=migrator t=2025-06-13T07:10:09.09913803Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" grafana | logger=migrator t=2025-06-13T07:10:09.100389524Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.251734ms grafana | logger=migrator t=2025-06-13T07:10:09.104829008Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" grafana | logger=migrator t=2025-06-13T07:10:09.106095682Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.269424ms grafana | logger=migrator t=2025-06-13T07:10:09.109495497Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" grafana | logger=migrator t=2025-06-13T07:10:09.110747012Z level=info msg="Migration successfully executed" id="add 
index in alert_rule table on guid columns" duration=1.251574ms grafana | logger=migrator t=2025-06-13T07:10:09.115199016Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" grafana | logger=migrator t=2025-06-13T07:10:09.12588357Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=10.665833ms grafana | logger=migrator t=2025-06-13T07:10:09.131323554Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" grafana | logger=migrator t=2025-06-13T07:10:09.142420885Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=11.055091ms grafana | logger=migrator t=2025-06-13T07:10:09.146636236Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" grafana | logger=migrator t=2025-06-13T07:10:09.154083738Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=7.446232ms grafana | logger=migrator t=2025-06-13T07:10:09.159034702Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" grafana | logger=migrator t=2025-06-13T07:10:09.169050203Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=10.014681ms grafana | logger=migrator t=2025-06-13T07:10:09.172498249Z level=info msg="Executing migration" id="remove the datasources:drilldown action" grafana | logger=migrator t=2025-06-13T07:10:09.172663912Z level=info msg="Removed 0 datasources:drilldown permissions" grafana | logger=migrator t=2025-06-13T07:10:09.172672512Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=174.783µs grafana | logger=migrator t=2025-06-13T07:10:09.177179269Z level=info msg="Executing migration" id="remove title in folder unique index" grafana | logger=migrator t=2025-06-13T07:10:09.178127326Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=945.247µs grafana | logger=migrator t=2025-06-13T07:10:09.181573382Z level=info msg="migrations completed" performed=654 skipped=0 duration=5.464635475s grafana | logger=migrator t=2025-06-13T07:10:09.182620852Z level=info msg="Unlocking database" grafana | logger=sqlstore t=2025-06-13T07:10:09.205525079Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2025-06-13T07:10:09.205941867Z level=info msg="Created default organization" grafana | logger=secrets t=2025-06-13T07:10:09.209928292Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-13T07:10:09.303086799Z level=info msg="Restored cache from database" duration=396.827µs grafana | logger=resource-migrator t=2025-06-13T07:10:09.311989539Z level=info msg="Locking database" grafana | logger=resource-migrator t=2025-06-13T07:10:09.312009569Z level=info msg="Starting DB migrations" grafana | logger=resource-migrator t=2025-06-13T07:10:09.319531823Z level=info msg="Executing migration" id="create resource_migration_log table" grafana | logger=resource-migrator t=2025-06-13T07:10:09.320252587Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=720.724µs grafana | logger=resource-migrator t=2025-06-13T07:10:09.32673937Z level=info msg="Executing migration" 
id="Initialize resource tables" grafana | logger=resource-migrator t=2025-06-13T07:10:09.32675608Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=16.89µs grafana | logger=resource-migrator t=2025-06-13T07:10:09.330528543Z level=info msg="Executing migration" id="drop table resource" grafana | logger=resource-migrator t=2025-06-13T07:10:09.331182935Z level=info msg="Migration successfully executed" id="drop table resource" duration=651.962µs grafana | logger=resource-migrator t=2025-06-13T07:10:09.336516667Z level=info msg="Executing migration" id="create table resource" grafana | logger=resource-migrator t=2025-06-13T07:10:09.338471834Z level=info msg="Migration successfully executed" id="create table resource" duration=1.965758ms grafana | logger=resource-migrator t=2025-06-13T07:10:09.342774346Z level=info msg="Executing migration" id="create table resource, index: 0" grafana | logger=resource-migrator t=2025-06-13T07:10:09.34400849Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.233884ms grafana | logger=resource-migrator t=2025-06-13T07:10:09.347836332Z level=info msg="Executing migration" id="drop table resource_history" grafana | logger=resource-migrator t=2025-06-13T07:10:09.347919904Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=83.692µs grafana | logger=resource-migrator t=2025-06-13T07:10:09.350482093Z level=info msg="Executing migration" id="create table resource_history" grafana | logger=resource-migrator t=2025-06-13T07:10:09.351588745Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.106132ms grafana | logger=resource-migrator t=2025-06-13T07:10:09.356678981Z level=info msg="Executing migration" id="create table resource_history, index: 0" grafana | logger=resource-migrator t=2025-06-13T07:10:09.358539767Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.863606ms grafana | logger=resource-migrator t=2025-06-13T07:10:09.362595284Z level=info msg="Executing migration" id="create table resource_history, index: 1" grafana | logger=resource-migrator t=2025-06-13T07:10:09.363864818Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.269374ms grafana | logger=resource-migrator t=2025-06-13T07:10:09.367315134Z level=info msg="Executing migration" id="drop table resource_version" grafana | logger=resource-migrator t=2025-06-13T07:10:09.367547198Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=231.474µs grafana | logger=resource-migrator t=2025-06-13T07:10:09.371939863Z level=info msg="Executing migration" id="create table resource_version" grafana | logger=resource-migrator t=2025-06-13T07:10:09.372921821Z level=info msg="Migration successfully executed" id="create table resource_version" duration=964.828µs grafana | logger=resource-migrator t=2025-06-13T07:10:09.376378037Z level=info msg="Executing migration" id="create table resource_version, index: 0" grafana | logger=resource-migrator t=2025-06-13T07:10:09.378947976Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=2.568839ms grafana | logger=resource-migrator t=2025-06-13T07:10:09.386695334Z level=info msg="Executing migration" id="drop table resource_blob" grafana | logger=resource-migrator t=2025-06-13T07:10:09.38701235Z level=info 
msg="Migration successfully executed" id="drop table resource_blob" duration=317.256µs grafana | logger=resource-migrator t=2025-06-13T07:10:09.390483336Z level=info msg="Executing migration" id="create table resource_blob" grafana | logger=resource-migrator t=2025-06-13T07:10:09.391879582Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.395466ms grafana | logger=resource-migrator t=2025-06-13T07:10:09.396712315Z level=info msg="Executing migration" id="create table resource_blob, index: 0" grafana | logger=resource-migrator t=2025-06-13T07:10:09.398203523Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=1.494798ms grafana | logger=resource-migrator t=2025-06-13T07:10:09.402767271Z level=info msg="Executing migration" id="create table resource_blob, index: 1" grafana | logger=resource-migrator t=2025-06-13T07:10:09.404170157Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.402007ms grafana | logger=resource-migrator t=2025-06-13T07:10:09.410474568Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" grafana | logger=resource-migrator t=2025-06-13T07:10:09.422400045Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=11.922238ms grafana | logger=resource-migrator t=2025-06-13T07:10:09.427715686Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" grafana | logger=resource-migrator t=2025-06-13T07:10:09.4383935Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=10.678264ms grafana | logger=resource-migrator t=2025-06-13T07:10:09.443763992Z level=info msg="Executing migration" id="Add index to resource_history for polling" grafana | logger=resource-migrator t=2025-06-13T07:10:09.444903484Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=1.140682ms grafana | logger=resource-migrator t=2025-06-13T07:10:09.453545898Z level=info msg="Executing migration" id="Add index to resource for loading" grafana | logger=resource-migrator t=2025-06-13T07:10:09.455298942Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=1.753844ms grafana | logger=resource-migrator t=2025-06-13T07:10:09.458952722Z level=info msg="Executing migration" id="Add column folder in resource_history" grafana | logger=resource-migrator t=2025-06-13T07:10:09.470234497Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=11.280965ms grafana | logger=resource-migrator t=2025-06-13T07:10:09.473641752Z level=info msg="Executing migration" id="Add column folder in resource" grafana | logger=resource-migrator t=2025-06-13T07:10:09.484385246Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=10.742564ms grafana | logger=resource-migrator t=2025-06-13T07:10:09.487862413Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" grafana | logger=deletion-marker-migrator t=2025-06-13T07:10:09.487924664Z level=info msg="finding any deletion markers" grafana | logger=resource-migrator t=2025-06-13T07:10:09.488371003Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=508.31µs grafana | 
logger=resource-migrator t=2025-06-13T07:10:09.492777377Z level=info msg="Executing migration" id="Add index to resource_history for get trash" grafana | logger=resource-migrator t=2025-06-13T07:10:09.493764206Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=988.19µs grafana | logger=resource-migrator t=2025-06-13T07:10:09.497220322Z level=info msg="Executing migration" id="Add generation to resource history" grafana | logger=resource-migrator t=2025-06-13T07:10:09.513556453Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=16.335791ms grafana | logger=resource-migrator t=2025-06-13T07:10:09.518003488Z level=info msg="Executing migration" id="Add generation index to resource history" grafana | logger=resource-migrator t=2025-06-13T07:10:09.519263022Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=1.259784ms grafana | logger=resource-migrator t=2025-06-13T07:10:09.523598374Z level=info msg="migrations completed" performed=26 skipped=0 duration=204.108722ms grafana | logger=resource-migrator t=2025-06-13T07:10:09.524140555Z level=info msg="Unlocking database" grafana | t=2025-06-13T07:10:09.524431021Z level=info caller=logger.go:214 time=2025-06-13T07:10:09.524414051Z msg="Using channel notifier" logger=sql-resource-server grafana | logger=plugin.store t=2025-06-13T07:10:09.535178506Z level=info msg="Loading plugins..." grafana | logger=plugins.registration t=2025-06-13T07:10:09.572204222Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugins.initialization t=2025-06-13T07:10:09.572304824Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugin.store t=2025-06-13T07:10:09.572415046Z level=info msg="Plugins loaded" count=53 duration=37.23721ms grafana | logger=query_data t=2025-06-13T07:10:09.577828299Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2025-06-13T07:10:09.581632181Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-13T07:10:09.59624378Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert t=2025-06-13T07:10:09.602734043Z level=info msg="Using simple database alert instance store" grafana | logger=ngalert.state.manager.persist t=2025-06-13T07:10:09.602749474Z level=info msg="Using sync state persister" grafana | logger=infra.usagestats.collector t=2025-06-13T07:10:09.606526416Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=grafanaStorageLogger t=2025-06-13T07:10:09.608418262Z level=info msg="Storage starting" grafana | logger=plugin.backgroundinstaller t=2025-06-13T07:10:09.608554205Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=ngalert.state.manager t=2025-06-13T07:10:09.610022653Z level=info msg="Warming state cache for startup" grafana | logger=ngalert.multiorg.alertmanager t=2025-06-13T07:10:09.614958957Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=http.server t=2025-06-13T07:10:09.621019742Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=plugins.update.checker t=2025-06-13T07:10:09.704194269Z level=info msg="Update check succeeded" 
duration=89.228721ms grafana | logger=provisioning.datasources t=2025-06-13T07:10:09.742342447Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=ngalert.state.manager t=2025-06-13T07:10:09.749648085Z level=info msg="State cache has been initialized" states=0 duration=139.625172ms grafana | logger=ngalert.scheduler t=2025-06-13T07:10:09.749686326Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 grafana | logger=ticker t=2025-06-13T07:10:09.749920101Z level=info msg=starting first_tick=2025-06-13T07:10:10Z grafana | logger=sqlstore.transactions t=2025-06-13T07:10:09.753335005Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=grafana-apiserver t=2025-06-13T07:10:09.76143142Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=sqlstore.transactions t=2025-06-13T07:10:09.763862187Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 grafana | logger=provisioning.alerting t=2025-06-13T07:10:09.763945479Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2025-06-13T07:10:09.763966699Z level=info msg="finished to provision alerting" grafana | logger=grafana-apiserver t=2025-06-13T07:10:09.765409467Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T07:10:09.766239552Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T07:10:09.766983726Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T07:10:09.76771638Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T07:10:09.769029755Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-13T07:10:09.770748348Z level=info msg="Patterns update finished" duration=161.49358ms grafana | logger=grafana-apiserver t=2025-06-13T07:10:09.771456391Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" grafana | logger=sqlstore.transactions t=2025-06-13T07:10:09.775787705Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=provisioning.dashboard t=2025-06-13T07:10:09.777175321Z level=info msg="starting to provision dashboards" grafana | logger=grafana-apiserver t=2025-06-13T07:10:09.77768208Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-13T07:10:09.779464454Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana.update.checker t=2025-06-13T07:10:09.828313506Z level=info msg="Update check succeeded" duration=213.615885ms grafana | logger=app-registry t=2025-06-13T07:10:09.834340391Z level=info msg="app registry initialized" grafana | logger=plugin.installer t=2025-06-13T07:10:10.020183515Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=installer.fs t=2025-06-13T07:10:10.113014385Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to 
/var/lib/grafana/plugins/grafana-exploretraces-app" grafana | logger=plugins.registration t=2025-06-13T07:10:10.143082659Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app grafana | logger=plugin.backgroundinstaller t=2025-06-13T07:10:10.143111349Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=534.536474ms grafana | logger=plugin.backgroundinstaller t=2025-06-13T07:10:10.14313267Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=plugin.installer t=2025-06-13T07:10:10.308066895Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=provisioning.dashboard t=2025-06-13T07:10:10.370196001Z level=info msg="finished to provision dashboards" grafana | logger=installer.fs t=2025-06-13T07:10:10.376774106Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" grafana | logger=plugins.registration t=2025-06-13T07:10:10.396228557Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app grafana | logger=plugin.backgroundinstaller t=2025-06-13T07:10:10.396267158Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=253.130327ms grafana | logger=plugin.backgroundinstaller t=2025-06-13T07:10:10.396287308Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=plugin.installer t=2025-06-13T07:10:10.656925858Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=installer.fs t=2025-06-13T07:10:10.800856714Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.17 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" grafana | logger=plugins.registration t=2025-06-13T07:10:10.824714048Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app grafana | logger=plugin.backgroundinstaller t=2025-06-13T07:10:10.82477989Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=428.488432ms grafana | logger=plugin.backgroundinstaller t=2025-06-13T07:10:10.824851451Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=plugin.installer t=2025-06-13T07:10:10.995916313Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=installer.fs t=2025-06-13T07:10:11.051927961Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" grafana | logger=plugins.registration t=2025-06-13T07:10:11.069220211Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app grafana | logger=plugin.backgroundinstaller t=2025-06-13T07:10:11.069242002Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=244.34064ms grafana | logger=infra.usagestats t=2025-06-13T07:11:42.624781349Z level=info msg="Usage stats are ready to report" kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... 
kafka | [2025-06-13 07:10:09,028] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:09,029] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:09,029] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:09,029] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:09,030] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:09,030] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:09,030] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:09,030] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:09,030] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:09,031] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:09,031] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:09,031] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:09,031] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:09,031] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:09,031] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:09,031] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:09,031] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:09,031] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:09,034] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:09,038] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-13 07:10:09,042] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-13 07:10:09,049] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 07:10:09,069] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 07:10:09,069] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 07:10:09,078] INFO Socket connection established, initiating session, client: /172.17.0.8:51252, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 07:10:09,108] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000022f050000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 07:10:09,229] INFO Session: 0x10000022f050000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:09,229] INFO EventThread shut down for session: 0x10000022f050000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... 
kafka | [2025-06-13 07:10:09,965] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2025-06-13 07:10:10,275] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-13 07:10:10,351] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2025-06-13 07:10:10,352] INFO starting (kafka.server.KafkaServer) kafka | [2025-06-13 07:10:10,353] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2025-06-13 07:10:10,367] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-13 07:10:10,371] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:10,371] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:10,371] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:10,371] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:10,371] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:10,371] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.
jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/
java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:10,371] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:10,371] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:10,371] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:10,371] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:10,371] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:10,371] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:10,371] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:10,371] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:10,371] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:10,371] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:10,371] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:10,371] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:10,373] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@52851b44 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-13 07:10:10,376] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-13 07:10:10,382] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 07:10:10,385] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-13 07:10:10,386] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 07:10:10,394] INFO Socket connection established, initiating session, client: /172.17.0.8:51254, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 07:10:10,401] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000022f050001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-13 07:10:10,405] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-13 07:10:10,714] INFO Cluster ID = NTRvZRYCTeeyE7gtQqKPJg (kafka.server.KafkaServer) kafka | [2025-06-13 07:10:10,718] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2025-06-13 07:10:10,765] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.initial.rebalance.delay.ms = 3000 kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.4-IV0 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | 
log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | 
remote.log.index.file.cache.total.size.bytes = 1073741824
kafka | remote.log.manager.task.interval.ms = 30000
kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
kafka | remote.log.manager.task.retry.backoff.ms = 500
kafka | remote.log.manager.task.retry.jitter = 0.2
kafka | remote.log.manager.thread.pool.size = 10
kafka | remote.log.metadata.manager.class.name = null
kafka | remote.log.metadata.manager.class.path = null
kafka | remote.log.metadata.manager.impl.prefix = null
kafka | remote.log.metadata.manager.listener.name = null
kafka | remote.log.reader.max.pending.tasks = 100
kafka | remote.log.reader.threads = 10
kafka | remote.log.storage.manager.class.name = null
kafka | remote.log.storage.manager.class.path = null
kafka | remote.log.storage.manager.impl.prefix = null
kafka | remote.log.storage.system.enable = false
kafka | replica.fetch.backoff.ms = 1000
kafka | replica.fetch.max.bytes = 1048576
kafka | replica.fetch.min.bytes = 1
kafka | replica.fetch.response.max.bytes = 10485760
kafka | replica.fetch.wait.max.ms = 500
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
kafka | replica.lag.time.max.ms = 30000
kafka | replica.selector.class = null
kafka | replica.socket.receive.buffer.bytes = 65536
kafka | replica.socket.timeout.ms = 30000
kafka | replication.quota.window.num = 11
kafka | replication.quota.window.size.seconds = 1
kafka | request.timeout.ms = 30000
kafka | reserved.broker.max.id = 1000
kafka | sasl.client.callback.handler.class = null
kafka | sasl.enabled.mechanisms = [GSSAPI]
kafka | sasl.jaas.config = null
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | sasl.kerberos.min.time.before.relogin = 60000
kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka | sasl.kerberos.service.name = null
kafka | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | sasl.login.callback.handler.class = null
kafka | sasl.login.class = null
kafka | sasl.login.connect.timeout.ms = null
kafka | sasl.login.read.timeout.ms = null
kafka | sasl.login.refresh.buffer.seconds = 300
kafka | sasl.login.refresh.min.period.seconds = 60
kafka | sasl.login.refresh.window.factor = 0.8
kafka | sasl.login.refresh.window.jitter = 0.05
kafka | sasl.login.retry.backoff.max.ms = 10000
kafka | sasl.login.retry.backoff.ms = 100
kafka | sasl.mechanism.controller.protocol = GSSAPI
kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka | sasl.oauthbearer.clock.skew.seconds = 30
kafka | sasl.oauthbearer.expected.audience = null
kafka | sasl.oauthbearer.expected.issuer = null
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | sasl.oauthbearer.jwks.endpoint.url = null
kafka | sasl.oauthbearer.scope.claim.name = scope
kafka | sasl.oauthbearer.sub.claim.name = sub
kafka | sasl.oauthbearer.token.endpoint.url = null
kafka | sasl.server.callback.handler.class = null
kafka | sasl.server.max.receive.size = 524288
kafka | security.inter.broker.protocol = PLAINTEXT
kafka | security.providers = null
kafka | socket.connection.setup.timeout.max.ms = 30000
kafka | socket.connection.setup.timeout.ms = 10000
kafka | socket.listen.backlog.size = 50
kafka | socket.receive.buffer.bytes = 102400
kafka | socket.request.max.bytes = 104857600
kafka | socket.send.buffer.bytes = 102400
kafka | ssl.cipher.suites = []
kafka | ssl.client.auth = none
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | ssl.endpoint.identification.algorithm = https
kafka | ssl.engine.factory.class = null
kafka | ssl.key.password = null
kafka | ssl.keymanager.algorithm = SunX509
kafka | ssl.keystore.certificate.chain = null
kafka | ssl.keystore.key = null
kafka | ssl.keystore.location = null
kafka | ssl.keystore.password = null
kafka | ssl.keystore.type = JKS
kafka | ssl.principal.mapping.rules = DEFAULT
kafka | ssl.protocol = TLSv1.3
kafka | ssl.provider = null
kafka | ssl.secure.random.implementation = null
kafka | ssl.trustmanager.algorithm = PKIX
kafka | ssl.truststore.certificates = null
kafka | ssl.truststore.location = null
kafka | ssl.truststore.password = null
kafka | ssl.truststore.type = JKS
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
kafka | transaction.max.timeout.ms = 900000
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka | transaction.state.log.load.buffer.size = 5242880
kafka | transaction.state.log.min.isr = 2
kafka | transaction.state.log.num.partitions = 50
kafka | transaction.state.log.replication.factor = 3
kafka | transaction.state.log.segment.bytes = 104857600
kafka | transactional.id.expiration.ms = 604800000
kafka | unclean.leader.election.enable = false
kafka | zookeeper.clientCnxnSocket = null
kafka | zookeeper.connect = zookeeper:2181
kafka | zookeeper.connection.timeout.ms = null
kafka | zookeeper.max.in.flight.requests = 10
kafka | zookeeper.metadata.migration.enable = false
kafka | zookeeper.session.timeout.ms = 18000
kafka | zookeeper.set.acl = false
kafka | zookeeper.ssl.cipher.suites = null
kafka | zookeeper.ssl.client.enable = false
kafka | zookeeper.ssl.crl.enable = false
kafka | zookeeper.ssl.enabled.protocols = null
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka | zookeeper.ssl.keystore.location = null
kafka | zookeeper.ssl.keystore.password = null
kafka | zookeeper.ssl.keystore.type = null
kafka | zookeeper.ssl.ocsp.enable = false
kafka | zookeeper.ssl.protocol = TLSv1.2
kafka | zookeeper.ssl.truststore.location = null
kafka | zookeeper.ssl.truststore.password = null
kafka | zookeeper.ssl.truststore.type = null
kafka | (kafka.server.KafkaConfig)
kafka | [2025-06-13 07:10:10,798] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-13 07:10:10,798] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-13 07:10:10,799] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-13 07:10:10,801] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-13 07:10:10,835] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
kafka | [2025-06-13 07:10:10,838] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager)
kafka | [2025-06-13 07:10:10,851] INFO Loaded 0 logs in 16ms. (kafka.log.LogManager)
kafka | [2025-06-13 07:10:10,851] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka | [2025-06-13 07:10:10,853] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
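The KafkaConfig dump above (zookeeper.connect = zookeeper:2181, PLAINTEXT inter-broker security, and so on) comes from the compose-managed Confluent broker, whose entrypoint builds server.properties from KAFKA_* environment variables. As a minimal sketch of that naming convention only; the simplified mapping below ignores the double/triple-underscore escapes the real entrypoint supports, and the example variable values are assumptions consistent with what appears elsewhere in this log, not a copy of the job's compose file:

```python
# Illustrative only: approximate how KAFKA_* environment variables become the
# broker properties printed in the KafkaConfig dump above.
import os

def kafka_env_to_props(environ=None):
    """Map e.g. KAFKA_ZOOKEEPER_CONNECT=x to {'zookeeper.connect': 'x'}."""
    environ = os.environ if environ is None else environ
    props = {}
    for name, value in environ.items():
        if name.startswith("KAFKA_") and name != "KAFKA_OPTS":
            # Simplified rule: drop the prefix, lowercase, underscores -> dots.
            props[name[len("KAFKA_"):].lower().replace("_", ".")] = value
    return props

# Example values consistent with this broker's registration later in the log:
example_env = {
    "KAFKA_ZOOKEEPER_CONNECT": "zookeeper:2181",
    "KAFKA_ADVERTISED_LISTENERS": "PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092",
}
print(kafka_env_to_props(example_env))
```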
kafka | [2025-06-13 07:10:10,864] INFO Starting the log cleaner (kafka.log.LogCleaner)
kafka | [2025-06-13 07:10:10,917] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
kafka | [2025-06-13 07:10:10,931] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
kafka | [2025-06-13 07:10:10,948] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-13 07:10:10,993] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-13 07:10:11,379] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2025-06-13 07:10:11,382] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
kafka | [2025-06-13 07:10:11,405] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka | [2025-06-13 07:10:11,405] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2025-06-13 07:10:11,406] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
kafka | [2025-06-13 07:10:11,410] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
kafka | [2025-06-13 07:10:11,414] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-13 07:10:11,430] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 07:10:11,434] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 07:10:11,436] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 07:10:11,438] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 07:10:11,452] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2025-06-13 07:10:11,476] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
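The acceptor entries above show the broker listening on 0.0.0.0:9092 (the in-network PLAINTEXT listener) and 0.0.0.0:29092 (the host-mapped PLAINTEXT_HOST listener). A test harness normally has to wait for that port before driving traffic; the following is only a sketch of such a readiness probe, with the host, port, and timeout chosen for illustration rather than taken from this job's CSIT scripts, which may instead rely on compose healthchecks:

```python
# Minimal readiness probe: block until the host-mapped Kafka listener accepts
# TCP connections, or fail after `timeout` seconds.
import socket
import time

def wait_for_port(host="localhost", port=29092, timeout=120.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2.0):
                return True          # listener is up and accepting connections
        except OSError:
            time.sleep(1.0)          # broker still starting; retry
    raise TimeoutError(f"{host}:{port} not reachable after {timeout}s")

if __name__ == "__main__":
    wait_for_port()
```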
kafka | [2025-06-13 07:10:11,496] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1749798611488,1749798611488,1,0,0,72057603416719361,258,0,27
kafka | (kafka.zk.KafkaZkClient)
kafka | [2025-06-13 07:10:11,497] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
kafka | [2025-06-13 07:10:11,562] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
kafka | [2025-06-13 07:10:11,568] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 07:10:11,573] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 07:10:11,574] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 07:10:11,586] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2025-06-13 07:10:11,586] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 07:10:11,591] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-13 07:10:11,594] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
kafka | [2025-06-13 07:10:11,598] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
kafka | [2025-06-13 07:10:11,603] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-13 07:10:11,607] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-13 07:10:11,610] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-13 07:10:11,611] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2025-06-13 07:10:11,631] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
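At this point broker 1 is registered in ZooKeeper with its advertised listeners (PLAINTEXT://kafka:9092 for containers, PLAINTEXT_HOST://localhost:29092 for the host), and later in this log (07:10:38) the policy-pdp-pap topic the OPA PDP tests rely on is created. A hedged sketch of a client-side check for that state, assuming the kafka-python package is available on the host, which this job is not known to install:

```python
# Smoke check from the host: confirm the advertised PLAINTEXT_HOST listener is
# reachable and that policy-pdp-pap exists (it is auto-created later in this log).
from kafka import KafkaConsumer

consumer = KafkaConsumer(bootstrap_servers="localhost:29092",
                         client_id="csit-smoke-check")
topics = consumer.topics()   # set of topic names currently known to the cluster
print(sorted(topics))
assert "policy-pdp-pap" in topics, "policy-pdp-pap has not been created yet"
consumer.close()
```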
kafka | [2025-06-13 07:10:11,631] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
kafka | [2025-06-13 07:10:11,636] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
kafka | [2025-06-13 07:10:11,640] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
kafka | [2025-06-13 07:10:11,643] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
kafka | [2025-06-13 07:10:11,658] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-13 07:10:11,660] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
kafka | [2025-06-13 07:10:11,665] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-13 07:10:11,672] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
kafka | [2025-06-13 07:10:11,683] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-13 07:10:11,684] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
kafka | [2025-06-13 07:10:11,684] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-13 07:10:11,684] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-13 07:10:11,685] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
kafka | [2025-06-13 07:10:11,688] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
kafka | [2025-06-13 07:10:11,688] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
kafka | [2025-06-13 07:10:11,689] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
kafka | [2025-06-13 07:10:11,689] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka | [2025-06-13 07:10:11,689] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
kafka | [2025-06-13 07:10:11,690] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
kafka | [2025-06-13 07:10:11,694] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
kafka | [2025-06-13 07:10:11,698] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing.
(kafka.network.SocketServer) kafka | [2025-06-13 07:10:11,707] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-13 07:10:11,709] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-13 07:10:11,712] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-13 07:10:11,712] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-13 07:10:11,712] INFO Kafka startTimeMs: 1749798611706 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-13 07:10:11,713] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) kafka | [2025-06-13 07:10:11,714] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-13 07:10:11,715] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-13 07:10:11,716] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-13 07:10:11,716] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-13 07:10:11,718] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) kafka | [2025-06-13 07:10:11,721] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-13 07:10:11,725] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) kafka | [2025-06-13 07:10:11,733] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-13 07:10:11,734] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-13 07:10:11,734] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) kafka | [2025-06-13 07:10:11,735] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) kafka | [2025-06-13 07:10:11,737] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) kafka | [2025-06-13 07:10:11,749] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) kafka | [2025-06-13 07:10:11,774] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-13 07:10:11,819] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-13 07:10:11,819] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) 
(kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-13 07:10:16,751] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2025-06-13 07:10:16,752] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2025-06-13 07:10:38,265] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) kafka | [2025-06-13 07:10:38,267] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-13 07:10:38,267] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-13 07:10:38,273] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) kafka | [2025-06-13 07:10:38,313] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(NV9x-EkxTk6JYV4-bDt8bA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(5Oc_2tE9TPS4rMFE96sG-Q),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-13 07:10:38,314] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2025-06-13 07:10:38,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,317] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,317] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,317] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,317] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,317] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,317] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,317] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,317] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,317] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,318] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,318] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,318] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,318] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,318] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,318] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,318] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,318] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,318] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,319] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,319] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 
07:10:38,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,322] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,322] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,322] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:10:38,322] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-13 07:10:38,327] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,327] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,327] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,327] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,327] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,327] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,327] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,327] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,327] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,328] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,328] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,328] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,328] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to 
NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,328] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,328] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,328] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,328] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,328] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,328] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,329] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,329] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,329] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,329] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,329] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,329] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,329] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,329] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,329] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,329] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,330] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,331] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,332] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,332] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,332] TRACE [Controller id=1 epoch=1] Changed 
state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:10:38,332] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-13 07:10:38,455] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,455] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,455] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,455] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,455] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,455] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,455] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,455] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,455] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,455] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,456] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,457] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,457] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,457] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,457] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,457] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,457] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,457] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,457] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,457] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,457] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,457] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:10:38,459] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-13 07:10:38,459] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-13 07:10:38,459] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-13 07:10:38,459] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-13 07:10:38,459] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-13 07:10:38,460] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-13 07:10:38,460] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-13 07:10:38,460] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-13 07:10:38,460] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-13 07:10:38,460] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-13 07:10:38,460] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-13 07:10:38,460] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-13 07:10:38,460] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-13 07:10:38,460] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-13 07:10:38,460] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-13 07:10:38,460] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-13 07:10:38,461] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-13 07:10:38,461] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-13 07:10:38,461] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-13 07:10:38,461] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-13 07:10:38,461] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-13 07:10:38,461] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-13 07:10:38,461] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-13 07:10:38,461] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-13 07:10:38,461] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-13 07:10:38,461] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-13 07:10:38,461] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-13 07:10:38,461] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-13 07:10:38,462] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-13 07:10:38,462] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-13 07:10:38,462] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-13 07:10:38,462] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-13 07:10:38,462] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-13 07:10:38,462] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-13 07:10:38,462] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-13 07:10:38,462] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-13 07:10:38,462] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-13 07:10:38,462] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-13 07:10:38,462] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-13 07:10:38,463] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-13 07:10:38,463] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-13 07:10:38,463] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-13 07:10:38,463] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-13 07:10:38,463] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-13 07:10:38,463] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-13 07:10:38,463] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-13 07:10:38,463] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-13 07:10:38,463] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-13 07:10:38,463] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-13 07:10:38,463] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-13 07:10:38,464] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-13 07:10:38,465] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-13 07:10:38,467] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) kafka | [2025-06-13 07:10:38,468] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,469] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,469] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,469] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,469] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,469] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,469] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,469] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,469] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,469] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica 
(state.change.logger) kafka | [2025-06-13 07:10:38,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,474] 
TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,474] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2025-06-13 07:10:38,478] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,478] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,478] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,478] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,478] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,478] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,479] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:10:38,479] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-13 07:10:38,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,479] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 
epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,482] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,482] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,482] TRACE 
[Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,482] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,482] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,482] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:10:38,524] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-13 07:10:38,524] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-13 07:10:38,524] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-13 07:10:38,524] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-13 07:10:38,524] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-13 07:10:38,524] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-13 07:10:38,524] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-13 07:10:38,524] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-13 07:10:38,524] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-13 07:10:38,524] TRACE [Broker 
id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-13 07:10:38,525] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-13 07:10:38,525] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-13 07:10:38,525] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-13 07:10:38,525] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-13 07:10:38,525] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-13 07:10:38,525] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-13 07:10:38,525] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-13 07:10:38,525] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-13 07:10:38,525] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-13 07:10:38,525] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-13 07:10:38,525] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-13 07:10:38,525] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-13 07:10:38,525] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-13 07:10:38,525] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-13 07:10:38,525] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-13 07:10:38,526] TRACE [Broker id=1] 
Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-13 07:10:38,526] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-13 07:10:38,526] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-13 07:10:38,526] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-13 07:10:38,526] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-13 07:10:38,526] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-13 07:10:38,526] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-13 07:10:38,526] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-13 07:10:38,526] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-13 07:10:38,526] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-13 07:10:38,526] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-13 07:10:38,526] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-13 07:10:38,526] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-13 07:10:38,526] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-13 07:10:38,526] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-13 07:10:38,526] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-13 07:10:38,527] TRACE [Broker id=1] Handling 
LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-13 07:10:38,527] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-13 07:10:38,527] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-13 07:10:38,527] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-13 07:10:38,527] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-13 07:10:38,527] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-13 07:10:38,527] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-13 07:10:38,527] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-13 07:10:38,527] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-13 07:10:38,527] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-13 07:10:38,528] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) kafka | [2025-06-13 07:10:38,528] INFO [Broker id=1] Stopped fetchers as part of 
LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) kafka | [2025-06-13 07:10:38,572] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,585] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,587] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,589] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,590] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,604] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,605] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,605] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,605] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,605] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,613] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,614] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,615] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,615] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,615] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 07:10:38,625] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,626] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,626] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,627] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,627] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,636] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,637] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,637] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,638] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,638] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,647] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,648] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,648] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,648] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,648] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 07:10:38,656] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,657] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,658] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,658] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,658] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,666] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,667] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,667] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,668] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,668] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,675] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,676] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,677] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,677] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,677] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 07:10:38,686] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,687] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,687] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,687] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,687] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,695] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,696] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,696] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,696] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,696] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,705] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,706] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,706] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,706] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,706] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 07:10:38,713] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,714] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,714] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,714] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,715] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,723] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,724] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,724] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,724] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,724] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,734] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,735] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,735] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,735] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,736] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 07:10:38,745] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,746] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,746] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,746] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,746] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,756] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,757] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,757] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,757] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,757] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,765] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,765] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,766] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,766] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,766] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 07:10:38,773] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,774] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,774] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,774] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,774] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,783] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,784] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,784] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,784] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,784] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,793] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,793] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,793] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,793] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,793] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 07:10:38,801] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,802] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,802] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,802] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,802] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,811] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,812] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,812] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,812] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,812] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,821] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,822] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,822] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,822] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,822] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 07:10:38,831] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,832] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,832] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,832] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,832] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,841] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,842] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,842] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,842] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,842] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,851] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,852] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,852] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,852] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,852] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 07:10:38,860] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,862] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,862] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,862] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,862] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,871] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,872] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,872] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,872] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,872] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,881] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,882] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,882] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,882] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,882] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 07:10:38,892] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,893] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,893] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,893] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,893] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,904] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,906] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,906] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,906] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,906] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,914] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,914] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,914] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,914] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,914] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 07:10:38,922] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,923] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,923] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,923] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,923] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,930] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,931] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,931] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,931] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,931] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(NV9x-EkxTk6JYV4-bDt8bA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,937] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,937] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,937] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,937] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,937] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 07:10:38,946] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,947] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,947] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,947] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,947] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,955] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,956] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,956] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,956] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,956] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,965] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,967] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,967] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,967] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,967] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 07:10:38,981] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,982] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,982] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,982] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,982] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:38,991] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:38,992] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:38,992] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,992] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:38,992] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:39,000] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:39,001] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:39,001] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:39,001] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:39,001] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 07:10:39,009] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:39,010] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:39,010] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:39,010] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:39,010] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:39,018] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:39,019] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:39,019] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:39,019] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:39,019] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:39,028] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:39,028] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:39,028] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:39,029] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:39,029] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 07:10:39,037] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:39,038] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:39,038] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:39,038] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:39,038] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:39,045] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:39,046] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:39,046] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:39,046] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:39,046] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:39,053] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:39,054] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:39,054] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:39,054] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:39,054] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-13 07:10:39,063] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:39,065] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:39,065] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:39,065] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:39,065] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:39,074] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:39,075] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:39,075] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:39,075] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:39,075] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:10:39,084] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:10:39,084] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-13 07:10:39,084] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:39,084] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:10:39,084] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(5Oc_2tE9TPS4rMFE96sG-Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
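The records above show broker 1 creating each __consumer_offsets partition with cleanup.policy=compact and segment.bytes=104857600 and taking leadership at epoch 0 with ISR [1]. A minimal verification sketch, assuming the compose service is named "kafka", that the kafka:9092 listener seen in the log is reachable from inside the container, and that the Confluent kafka-topics wrapper is on the container PATH (none of this was run by the job):
$ # Hypothetical check: confirm the 50-partition __consumer_offsets layout and its single-broker leadership.
$ docker compose exec kafka kafka-topics --bootstrap-server kafka:9092 \
    --describe --topic __consumer_offsets | head -n 5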
(state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-13 07:10:39,089] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 
1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-43 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-13 07:10:39,090] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-13 07:10:39,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,102] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,104] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,104] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,104] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,104] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,104] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,104] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,104] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,104] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,104] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,104] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,104] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,104] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,104] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,104] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,105] INFO [GroupCoordinator 1]: Elected as the group 
coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,105] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,105] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,105] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,105] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,105] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,105] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,105] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,105] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,105] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,105] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,105] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,105] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,105] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,105] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,105] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,105] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,105] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,105] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,106] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group 
metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,106] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,106] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,106] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,106] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,106] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,106] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,106] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,106] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,106] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,106] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,106] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,106] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,106] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,106] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,106] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,106] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,106] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,106] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,106] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 
22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,106] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,106] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,106] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,106] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,107] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,107] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,107] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,107] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,107] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,107] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,107] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,107] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,107] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,107] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,107] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,107] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,107] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,107] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,107] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,107] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,107] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,107] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,107] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,107] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,107] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,107] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,107] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,107] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,107] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,107] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,107] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,107] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,107] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,107] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,107] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,107] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,107] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,107] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,107] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,107] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,107] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,107] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,107] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,108] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,108] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,110] INFO [Broker id=1] Finished LeaderAndIsr request in 637ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2025-06-13 07:10:39,111] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 9 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,112] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,112] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,112] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,112] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,112] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,113] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 9 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,113] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,113] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,113] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,113] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,113] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,113] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,115] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=5Oc_2tE9TPS4rMFE96sG-Q, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=NV9x-EkxTk6JYV4-bDt8bA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-13 07:10:39,115] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,116] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,116] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,116] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,116] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,116] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,116] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,116] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,117] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,117] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,117] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,117] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,117] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,117] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,118] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,118] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,118] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,118] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,118] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,118] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,118] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,119] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,119] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,119] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,119] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,119] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,119] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,119] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,120] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,121] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,121] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
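Per the records above, broker 1 was elected group coordinator for every __consumer_offsets partition and finished loading offsets and group metadata within a few milliseconds per partition, with the overall LeaderAndIsr request for 51 partitions completing in 637 ms. A minimal triage sketch for pulling those summary lines out of the broker container when debugging a CSIT run; the container name "kafka" is an assumption taken from the compose service name, and this command is not part of the job:
$ # Hypothetical helper: extract the LeaderAndIsr and group-coordinator summary lines from the broker log.
$ docker logs kafka 2>&1 | grep -E "Finished LeaderAndIsr request|Elected as the group coordinator" | tail -n 5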
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-13 07:10:39,126] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,126] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,126] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,126] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,126] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,126] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,126] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,126] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,126] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata 
request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,126] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,126] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,127] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,127] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,127] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,127] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,127] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,127] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,127] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition 
__consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,127] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,127] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,127] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,127] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,127] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,127] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,127] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,128] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,129] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,129] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,129] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-13 07:10:39,130] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-13 07:10:39,751] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-6d7dd9e0-ceec-417b-96be-0e880db3bf73 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:39,766] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-6d7dd9e0-ceec-417b-96be-0e880db3bf73 with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-6d7dd9e0-ceec-417b-96be-0e880db3bf73) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:40,027] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group fddc67c9-f8de-4e3f-9662-92c761a4150d in Empty state. Created a new member id consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3-b7340292-f5cb-4306-86b3-aa9dca814359 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:40,030] INFO [GroupCoordinator 1]: Preparing to rebalance group fddc67c9-f8de-4e3f-9662-92c761a4150d in state PreparingRebalance with old generation 0 (__consumer_offsets-31) (reason: Adding new member consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3-b7340292-f5cb-4306-86b3-aa9dca814359 with group instance id None; client reason: need to re-join with the given member-id: consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3-b7340292-f5cb-4306-86b3-aa9dca814359) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:42,778] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:42,800] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-6d7dd9e0-ceec-417b-96be-0e880db3bf73 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:43,031] INFO [GroupCoordinator 1]: Stabilized group fddc67c9-f8de-4e3f-9662-92c761a4150d generation 1 (__consumer_offsets-31) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:10:43,036] INFO [GroupCoordinator 1]: Assignment received from leader consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3-b7340292-f5cb-4306-86b3-aa9dca814359 for group fddc67c9-f8de-4e3f-9662-92c761a4150d for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:11:22,971] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group opa-pdp in Empty state. Created a new member id rdkafka-7b481954-04c2-42a4-9c40-5aebc5169a3c and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:11:22,972] INFO [GroupCoordinator 1]: Preparing to rebalance group opa-pdp in state PreparingRebalance with old generation 0 (__consumer_offsets-25) (reason: Adding new member rdkafka-7b481954-04c2-42a4-9c40-5aebc5169a3c with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:11:25,973] INFO [GroupCoordinator 1]: Stabilized group opa-pdp generation 1 (__consumer_offsets-25) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:11:25,976] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-7b481954-04c2-42a4-9c40-5aebc5169a3c for group opa-pdp for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:12:33,685] INFO Creating topic policy-notification with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-13 07:12:33,698] INFO [Controller id=1] New topics: [Set(policy-notification)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-notification,Some(rftaw9K4SCuU8W867x59Yg),Map(policy-notification-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-13 07:12:33,698] INFO [Controller id=1] New partition creation callback for policy-notification-0 (kafka.controller.KafkaController) kafka | [2025-06-13 07:12:33,698] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-13 07:12:33,698] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-13 07:12:33,698] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-13 07:12:33,698] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-13 07:12:33,705] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-13 07:12:33,705] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-notification-0 (state.change.logger) kafka | [2025-06-13 07:12:33,705] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-13 07:12:33,705] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) kafka | [2025-06-13 07:12:33,705] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-13 07:12:33,705] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-13 07:12:33,706] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 1 partitions (state.change.logger) kafka | [2025-06-13 07:12:33,707] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-13 07:12:33,707] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition 
policy-notification-0 (state.change.logger) kafka | [2025-06-13 07:12:33,708] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-notification-0) (kafka.server.ReplicaFetcherManager) kafka | [2025-06-13 07:12:33,708] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) kafka | [2025-06-13 07:12:33,712] INFO [LogLoader partition=policy-notification-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-13 07:12:33,713] INFO Created log for partition policy-notification-0 in /var/lib/kafka/data/policy-notification-0 with properties {} (kafka.log.LogManager) kafka | [2025-06-13 07:12:33,714] INFO [Partition policy-notification-0 broker=1] No checkpointed highwatermark is found for partition policy-notification-0 (kafka.cluster.Partition) kafka | [2025-06-13 07:12:33,714] INFO [Partition policy-notification-0 broker=1] Log loaded for partition policy-notification-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-13 07:12:33,714] INFO [Broker id=1] Leader policy-notification-0 with topic id Some(rftaw9K4SCuU8W867x59Yg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-13 07:12:33,718] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition policy-notification-0 (state.change.logger) kafka | [2025-06-13 07:12:33,719] INFO [Broker id=1] Finished LeaderAndIsr request in 13ms correlationId 3 from controller 1 for 1 partitions (state.change.logger) kafka | [2025-06-13 07:12:33,719] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=rftaw9K4SCuU8W867x59Yg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-13 07:12:33,721] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-notification-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 07:12:33,721] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-13 07:12:33,722] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-13 07:14:13,003] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-aedf2ca8-e738-476a-a815-2c83a6c7b453 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:14:13,004] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member rdkafka-aedf2ca8-e738-476a-a815-2c83a6c7b453 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:14:16,005] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:14:16,007] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-aedf2ca8-e738-476a-a815-2c83a6c7b453 for group testgrp for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:14:16,122] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 1 (__consumer_offsets-3) (reason: Removing member rdkafka-aedf2ca8-e738-476a-a815-2c83a6c7b453 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:14:16,123] INFO [GroupCoordinator 1]: Group testgrp with generation 2 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:14:16,125] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-aedf2ca8-e738-476a-a815-2c83a6c7b453, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.5, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:14:38,713] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-2ed900a6-36d9-4c2a-a07f-5842e2e38113 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:14:38,714] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 2 (__consumer_offsets-3) (reason: Adding new member rdkafka-2ed900a6-36d9-4c2a-a07f-5842e2e38113 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:14:41,715] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 3 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:14:41,718] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-2ed900a6-36d9-4c2a-a07f-5842e2e38113 for group testgrp for generation 3. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:14:41,726] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 3 (__consumer_offsets-3) (reason: Removing member rdkafka-2ed900a6-36d9-4c2a-a07f-5842e2e38113 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:14:41,726] INFO [GroupCoordinator 1]: Group testgrp with generation 4 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:14:41,727] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-2ed900a6-36d9-4c2a-a07f-5842e2e38113, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.5, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:15:04,271] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-b024653c-e931-491a-8fdb-1c83c8a9a226 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:15:04,273] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 4 (__consumer_offsets-3) (reason: Adding new member rdkafka-b024653c-e931-491a-8fdb-1c83c8a9a226 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:15:07,274] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 5 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:15:07,277] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-b024653c-e931-491a-8fdb-1c83c8a9a226 for group testgrp for generation 5. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:15:07,284] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 5 (__consumer_offsets-3) (reason: Removing member rdkafka-b024653c-e931-491a-8fdb-1c83c8a9a226 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:15:07,284] INFO [GroupCoordinator 1]: Group testgrp with generation 6 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:15:07,285] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-b024653c-e931-491a-8fdb-1c83c8a9a226, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.5, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-13 07:15:16,754] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2025-06-13 07:15:16,754] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2025-06-13 07:15:16,759] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 HashMap() (kafka.controller.KafkaController) kafka | [2025-06-13 07:15:16,760] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController) policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.5:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . ____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | policy-api | :: Spring Boot :: (v3.4.6) policy-api | policy-api | [2025-06-13T07:10:17.404+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final policy-api | [2025-06-13T07:10:17.483+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 37 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2025-06-13T07:10:17.484+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default" policy-api | [2025-06-13T07:10:18.897+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2025-06-13T07:10:19.057+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 149 ms. Found 6 JPA repository interfaces. 
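The GroupCoordinator entries earlier in this log (for the policy-pap, opa-pdp and testgrp groups) all show the same dynamic-membership cycle: a member with an unknown id joins, the group enters PreparingRebalance, stabilizes at the next generation, the leader's assignment is accepted, and for testgrp an explicit LeaveGroup empties the group again. As a rough illustration of the client side of that cycle, below is a minimal sketch assuming a Python client built on librdkafka (confluent-kafka), which is consistent with the rdkafka-prefixed member ids in the log; group and topic names are taken from the log, everything else is an assumption:

    # Minimal sketch (assumption: confluent-kafka / librdkafka client) of a consumer
    # joining group "testgrp", which produces join/rebalance/stabilize entries like those above.
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "kafka:9092",   # broker address as used elsewhere in this log
        "group.id": "testgrp",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["policy-pdp-pap"])   # topic name taken from the log
    try:
        msg = consumer.poll(5.0)             # first poll triggers the group join / rebalance
        if msg is not None and msg.error() is None:
            print(msg.topic(), msg.partition(), msg.value())
    finally:
        consumer.close()                     # sends the explicit LeaveGroup the coordinator logs

Closing the consumer is what the coordinator records as the member having "left group testgrp through explicit `LeaveGroup`".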
policy-api | [2025-06-13T07:10:19.749+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) policy-api | [2025-06-13T07:10:19.762+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2025-06-13T07:10:19.763+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2025-06-13T07:10:19.764+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] policy-api | [2025-06-13T07:10:19.800+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2025-06-13T07:10:19.800+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2256 ms policy-api | [2025-06-13T07:10:20.128+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2025-06-13T07:10:20.214+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final policy-api | [2025-06-13T07:10:20.262+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2025-06-13T07:10:20.651+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2025-06-13T07:10:20.702+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2025-06-13T07:10:20.927+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@59aa1d1c policy-api | [2025-06-13T07:10:20.929+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-api | [2025-06-13T07:10:21.016+00:00|INFO|pooling|main] HHH10001005: Database info: policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] policy-api | Database driver: undefined/unknown policy-api | Database version: 16.4 policy-api | Autocommit mode: undefined/unknown policy-api | Isolation level: undefined/unknown policy-api | Minimum pool size: undefined/unknown policy-api | Maximum pool size: undefined/unknown policy-api | [2025-06-13T07:10:23.153+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2025-06-13T07:10:23.156+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2025-06-13T07:10:23.822+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2025-06-13T07:10:24.720+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2025-06-13T07:10:25.801+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2025-06-13T07:10:25.853+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager policy-api | [2025-06-13T07:10:26.550+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' policy-api | [2025-06-13T07:10:26.692+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2025-06-13T07:10:26.716+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1' policy-api | [2025-06-13T07:10:26.738+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.086 seconds (process running for 10.686) policy-api | [2025-06-13T07:10:39.919+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-api | [2025-06-13T07:10:39.920+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-api | [2025-06-13T07:10:39.921+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms policy-api | [2025-06-13T07:13:50.864+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-6] ***** OrderedServiceImpl implementers: policy-api | [] policy-api | [2025-06-13T07:15:07.601+00:00|WARN|CommonRestController|http-nio-6969-exec-1] "incoming fragment" INVALID, item has status INVALID policy-api | item "entity" value "abac:1.0.7" INVALID, does not equal existing entity policy-api | policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot policy-csit | Run Robot test policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates policy-csit | -v POLICY_API_IP:policy-api:6969 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 policy-csit | -v APEX_IP:policy-apex-pdp:6969 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 policy-csit | -v KAFKA_IP:kafka:9092 policy-csit | -v PROMETHEUS_IP:prometheus:9090 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 policy-csit | -v TEMP_FOLDER:/tmp/distribution policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 policy-csit | -v TEST_ENV:docker policy-csit | -v JAEGER_IP:jaeger:16686 policy-csit | Starting Robot test suites ... 
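The ROBOT_VARIABLES block above is handed to Robot Framework as repeated -v/--variable options when the two suites (opa-pdp-test.robot, opa-pdp-slas.robot) are started. A minimal sketch of an equivalent invocation through Robot Framework's Python entry point, using only a subset of the variables printed above; the actual policy-csit wrapper script is not shown in this log, so the exact call is an assumption:

    # Minimal sketch: equivalent of "robot -v NAME:value ... opa-pdp-test.robot opa-pdp-slas.robot"
    from robot import run_cli

    run_cli([
        "--variable", "DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies",
        "--variable", "POLICY_API_IP:policy-api:6969",
        "--variable", "POLICY_PAP_IP:policy-pap:6969",
        "--variable", "POLICY_OPA_IP:policy-opa-pdp:8282",
        "--variable", "KAFKA_IP:kafka:9092",
        "--variable", "PROMETHEUS_IP:prometheus:9090",
        "--variable", "TEST_ENV:docker",
        "--outputdir", "/tmp/results",       # matches the Output/Log/Report paths reported below
        "opa-pdp-test.robot",
        "opa-pdp-slas.robot",
    ], exit=False)                           # exit=False returns the rc instead of exiting

The rc printed as "RESULT: 0" below corresponds to the number of failed tests, which is why an all-pass run reports 0.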
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Healthcheck :: Verify OPA PDP health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateDataBeforePolicyDeployment | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesZonePolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesVehiclePolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesAbacPolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test | PASS |
policy-csit | 5 tests, 5 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas | PASS |
policy-csit | 5 tests, 5 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas | PASS |
policy-csit | 10 tests, 10 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-db-migrator | Waiting for postgres port 5432...
policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
policy-db-migrator | Connection to postgres (172.17.0.3) 5432 port [tcp/postgresql] succeeded!
policy-db-migrator | Initializing policyadmin...
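The three "Connection refused" lines followed by "succeeded!" are the migrator's nc-based retry loop waiting for PostgreSQL to accept connections before it initializes the policyadmin schema. A minimal stand-alone sketch of the same wait-for-port pattern; the retry interval is an assumption, and the real container simply loops over nc:

    # Minimal sketch: block until a TCP port accepts connections, like the
    # "Waiting for postgres port 5432..." / nc retry loop in the migrator output.
    import socket
    import time

    def wait_for_port(host: str, port: int, retry_seconds: float = 2.0) -> None:
        while True:
            try:
                with socket.create_connection((host, port), timeout=2.0):
                    return                    # equivalent of "Connection ... succeeded!"
            except OSError:
                print(f"{host}:{port} not ready yet, retrying")
                time.sleep(retry_seconds)     # retry interval is an assumption

    wait_for_port("postgres", 5432)           # host and port taken from the log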
policy-db-migrator | 321 blocks policy-db-migrator | Preparing upgrade release version: 0800 policy-db-migrator | Preparing upgrade release version: 0900 policy-db-migrator | Preparing upgrade release version: 1000 policy-db-migrator | Preparing upgrade release version: 1100 policy-db-migrator | Preparing upgrade release version: 1200 policy-db-migrator | Preparing upgrade release version: 1300 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------+--------- policy-db-migrator | policyadmin | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | 
operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0210-jpatoscadatatype_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 
policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0470-pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 
policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0640-toscanodetypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0770-toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0780-toscarequirements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0810-toscatopologytemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 
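Each "> upgrade NNNN-*.sql" block above follows the same shape: psql applies the script (the CREATE TABLE / CREATE INDEX / ALTER TABLE lines), the migrator records the outcome in the per-database changelog (the trailing "INSERT 0 1"), and "rc=0" is the script's exit status. A minimal sketch of that loop, assuming hypothetical script paths and a simplified changelog insert (the real policy-db-migrator entrypoint also records from/to versions and a tag):

#!/bin/bash
# Sketch only: iterate numbered upgrade scripts and book-keep each run.
# The "upgrade/" directory and the reduced changelog columns are assumptions.
set -u
DB=policyadmin
for sql in $(ls upgrade/*.sql | sort); do
  psql -U "${PGUSER:-policy_user}" -d "$DB" -f "$sql"   # prints CREATE TABLE / ALTER TABLE ...
  rc=$?
  echo "rc=$rc"
  # the "INSERT 0 1" lines in the log are this bookkeeping insert succeeding
  psql -U "${PGUSER:-policy_user}" -d "$DB" -c \
    "INSERT INTO ${DB}_schema_changelog(script, operation, success)
     VALUES ('$(basename "$sql")', 'upgrade', $((rc == 0 ? 1 : 0)));"
  [ "$rc" -ne 0 ] && break
done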
policy-db-migrator | policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdp.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0210-sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-sequence.sql policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 
policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-upgrade.sql policy-db-migrator | msg policy-db-migrator | --------------------------- policy-db-migrator | upgrade to 1100 completed policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | DROP INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdpstatistics.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-statistics_sequence.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | policyadmin: OK: upgrade (1300) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | -------------+--------- policy-db-migrator | policyadmin | 1300 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:04.708229 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:04.765862 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:04.82842 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:04.885417 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:04.936287 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:04.988225 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:05.04717 policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:05.091305 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:05.142429 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 
1306250710040800u | 1 | 2025-06-13 07:10:05.207668 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:05.2608 policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:05.315698 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:05.37324 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:05.420854 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:05.478992 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:05.532723 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:05.587128 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:05.635439 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:05.687736 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:05.744819 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:05.805135 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:05.858467 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:05.918548 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:05.971755 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:06.033431 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:06.109649 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:06.168489 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:06.238905 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:06.298103 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:06.357133 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:06.402636 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:06.454003 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:06.500895 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:06.553845 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:06.616522 policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 
0800 | 1306250710040800u | 1 | 2025-06-13 07:10:06.674481 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:06.740775 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:06.794569 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:06.853783 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:06.916515 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:06.975287 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:07.035448 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:07.084016 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:07.141001 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:07.198124 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:07.258656 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:07.322025 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:07.37707 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:07.431159 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:07.493065 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:07.556822 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:07.605801 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:07.656394 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:07.707319 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:07.760757 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:07.820101 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:07.880724 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:07.951904 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:08.008446 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:08.066548 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:08.122412 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:08.167918 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 
2025-06-13 07:10:08.222459 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:08.280994 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:08.341294 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:08.413673 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:08.464352 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:08.514425 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:08.573144 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:08.625609 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:08.677969 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:08.735609 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:08.788516 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:08.838627 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:08.891137 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:08.945941 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:08.995814 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:09.047996 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:09.10032 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:09.149278 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:09.193464 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:09.239459 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:09.298754 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:09.351723 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:09.406659 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:09.45414 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:09.499732 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 
07:10:09.548403 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:09.611011 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:09.654786 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:09.695956 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:09.747609 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:09.80903 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:09.863252 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:09.907372 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1306250710040800u | 1 | 2025-06-13 07:10:09.956857 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1306250710040900u | 1 | 2025-06-13 07:10:10.008342 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1306250710040900u | 1 | 2025-06-13 07:10:10.062685 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1306250710040900u | 1 | 2025-06-13 07:10:10.115664 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1306250710040900u | 1 | 2025-06-13 07:10:10.160904 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1306250710040900u | 1 | 2025-06-13 07:10:10.216154 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1306250710040900u | 1 | 2025-06-13 07:10:10.270833 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306250710040900u | 1 | 2025-06-13 07:10:10.325543 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306250710040900u | 1 | 2025-06-13 07:10:10.377713 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306250710040900u | 1 | 2025-06-13 07:10:10.421156 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1306250710040900u | 1 | 2025-06-13 07:10:10.476805 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1306250710040900u | 1 | 2025-06-13 07:10:10.524322 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1306250710040900u | 1 | 2025-06-13 07:10:10.582599 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1306250710040900u | 1 | 2025-06-13 07:10:10.626666 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1306250710041000u | 1 | 2025-06-13 07:10:10.682471 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1306250710041000u | 1 | 2025-06-13 07:10:10.735477 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1306250710041000u | 1 | 2025-06-13 07:10:10.78077 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1306250710041000u | 1 | 2025-06-13 07:10:10.844841 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1306250710041000u | 
1 | 2025-06-13 07:10:10.900139 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1306250710041000u | 1 | 2025-06-13 07:10:10.951763 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1306250710041000u | 1 | 2025-06-13 07:10:11.003361 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1306250710041000u | 1 | 2025-06-13 07:10:11.053283 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1306250710041000u | 1 | 2025-06-13 07:10:11.096195 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1306250710041100u | 1 | 2025-06-13 07:10:11.152836 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1306250710041200u | 1 | 2025-06-13 07:10:11.20136 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1306250710041200u | 1 | 2025-06-13 07:10:11.248275 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1306250710041200u | 1 | 2025-06-13 07:10:11.302326 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1306250710041200u | 1 | 2025-06-13 07:10:11.357101 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1306250710041300u | 1 | 2025-06-13 07:10:11.40623 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1306250710041300u | 1 | 2025-06-13 07:10:11.451955 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1306250710041300u | 1 | 2025-06-13 07:10:11.500122 policy-db-migrator | (126 rows) policy-db-migrator | policy-db-migrator | policyadmin: OK @ 1300 policy-db-migrator | Initializing clampacm... policy-db-migrator | 97 blocks policy-db-migrator | Preparing upgrade release version: 1400 policy-db-migrator | Preparing upgrade release version: 1500 policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Preparing upgrade release version: 1601 policy-db-migrator | Preparing upgrade release version: 1700 policy-db-migrator | Preparing upgrade release version: 1701 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | 
policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | clampacm: upgrade available: 0 -> 1701 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1701 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0200-automationcompositiondefinition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-nodetemplatestate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participantsupportedelements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-participantreplica.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-participant.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participant_replica_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | 
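The per-schema bookkeeping that drives these runs is also visible in the output: schema_versions holds the currently installed version for each schema (clampacm starts at 0), the migrator compares it against the highest release it ships and prints "upgrade available: 0 -> 1701", and the version is advanced once the scripts succeed. A sketch of that check, assuming only the table and column names shown in the log and nothing else about the entrypoint:

#!/bin/bash
# Sketch only: read the installed version, compare with the shipped target,
# and advance schema_versions after a successful run.
DB=clampacm
current=$(psql -U policy_user -d "$DB" -Atc \
  "SELECT version FROM schema_versions WHERE name = '$DB';")
target=1701   # highest "Preparing upgrade release version" printed above
echo "$DB: upgrade available: $current -> $target"
if [ "$current" -lt "$target" ]; then
  # ... run the upgrade scripts for each release between current and target ...
  psql -U policy_user -d "$DB" -c \
    "UPDATE schema_versions SET version = $target WHERE name = '$DB';"
fi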
policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-message.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-messagejob.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-participantreplica.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 
policy-db-migrator | clampacm: OK: upgrade (1701) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 1701 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1306250710121400u | 1 | 2025-06-13 07:10:12.166674 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1306250710121400u | 1 | 2025-06-13 07:10:12.218184 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1306250710121400u | 1 | 2025-06-13 07:10:12.27877 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1306250710121400u | 1 | 2025-06-13 07:10:12.33639 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1306250710121400u | 1 | 2025-06-13 07:10:12.385568 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1306250710121400u | 1 | 2025-06-13 07:10:12.451548 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1306250710121400u | 1 | 2025-06-13 07:10:12.501049 policy-db-migrator | 8 | 
0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1306250710121400u | 1 | 2025-06-13 07:10:12.556505 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1306250710121400u | 1 | 2025-06-13 07:10:12.606856 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1306250710121400u | 1 | 2025-06-13 07:10:12.658421 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1306250710121400u | 1 | 2025-06-13 07:10:12.711875 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1306250710121400u | 1 | 2025-06-13 07:10:12.766186 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1306250710121400u | 1 | 2025-06-13 07:10:12.81866 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1306250710121500u | 1 | 2025-06-13 07:10:12.872742 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1306250710121500u | 1 | 2025-06-13 07:10:12.924012 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1306250710121500u | 1 | 2025-06-13 07:10:12.992528 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1306250710121500u | 1 | 2025-06-13 07:10:13.045665 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1306250710121500u | 1 | 2025-06-13 07:10:13.098366 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1306250710121500u | 1 | 2025-06-13 07:10:13.152438 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1306250710121500u | 1 | 2025-06-13 07:10:13.200134 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1306250710121500u | 1 | 2025-06-13 07:10:13.248921 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1306250710121600u | 1 | 2025-06-13 07:10:13.303639 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1306250710121600u | 1 | 2025-06-13 07:10:13.354919 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 1306250710121601u | 1 | 2025-06-13 07:10:13.407951 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1306250710121601u | 1 | 2025-06-13 07:10:13.457667 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1306250710121700u | 1 | 2025-06-13 07:10:13.514937 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1306250710121700u | 1 | 2025-06-13 07:10:13.573574 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1306250710121700u | 1 | 2025-06-13 07:10:13.627616 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1306250710121701u | 1 | 2025-06-13 07:10:13.683341 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1306250710121701u | 1 | 2025-06-13 07:10:13.742002 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1306250710121701u | 1 | 2025-06-13 07:10:13.795796 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1306250710121701u | 1 | 2025-06-13 07:10:13.852016 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1306250710121701u | 1 | 2025-06-13 07:10:13.90698 policy-db-migrator | 34 | 
0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1306250710121701u | 1 | 2025-06-13 07:10:13.951691 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1306250710121701u | 1 | 2025-06-13 07:10:14.008937 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1306250710121701u | 1 | 2025-06-13 07:10:14.057651 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1306250710121701u | 1 | 2025-06-13 07:10:14.101086 policy-db-migrator | (37 rows) policy-db-migrator | policy-db-migrator | clampacm: OK @ 1701 policy-db-migrator | Initializing pooling... policy-db-migrator | 4 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | pooling: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-distributed.locking.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | pooling: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | 
| | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1306250710141600u | 1 | 2025-06-13 07:10:14.774961 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | pooling: OK @ 1600 policy-db-migrator | Initializing operationshistory... policy-db-migrator | 6 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE 
policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-operationshistory.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | operationshistory: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1306250710151600u | 1 | 2025-06-13 07:10:15.4213 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1306250710151600u | 1 | 2025-06-13 07:10:15.486598 policy-db-migrator | (2 rows) policy-db-migrator | policy-db-migrator | operationshistory: OK @ 1600 policy-opa-pdp | Waiting for kafka port 9092... policy-opa-pdp | nc: connect to kafka (172.17.0.8) port 9092 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to kafka (172.17.0.8) port 9092 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to kafka (172.17.0.8) port 9092 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to kafka (172.17.0.8) port 9092 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to kafka (172.17.0.8) port 9092 (tcp) failed: Connection refused policy-opa-pdp | Connection to kafka (172.17.0.8) 9092 port [tcp/*] succeeded! policy-opa-pdp | Waiting for pap port 6969... 
policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | Connection to pap (172.17.0.9) 6969 port [tcp/*] succeeded! 
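[Editor's note] The opa-pdp container entrypoint polls its dependencies with nc before starting, as the retries above show: first the Kafka broker on 9092, then policy-pap on 6969. The real entrypoint is a shell loop, so the following is only an equivalent sketch in Go, with the addresses taken from the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForPort retries a TCP dial until the target accepts connections.
    func waitForPort(addr string) {
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Printf("Connection to %s succeeded\n", addr)
                return
            }
            fmt.Printf("connect to %s failed: %v\n", addr, err)
            time.Sleep(2 * time.Second)
        }
    }

    func main() {
        waitForPort("kafka:9092") // wait for the Kafka broker first
        waitForPort("pap:6969")   // then wait for the policy-pap API
    }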
policy-opa-pdp | time="2025-06-13T07:11:17Z" level=debug msg="###################################### " policy-opa-pdp | time="2025-06-13T07:11:17Z" level=debug msg="OPA-PDP: Starting initialisation " policy-opa-pdp | time="2025-06-13T07:11:17Z" level=debug msg="###################################### " policy-opa-pdp | time="2025-06-13T07:11:17Z" level=warning msg="KAFKA_URL not defined, using default value" policy-opa-pdp | time="2025-06-13T07:11:17Z" level=warning msg="PAP_TOPIC not defined, using default value" policy-opa-pdp | time="2025-06-13T07:11:17Z" level=warning msg="PATCH_TOPIC not defined, using default value" policy-opa-pdp | time="2025-06-13T07:11:17Z" level=warning msg="PATCH_GROUPID not defined, using default value" policy-opa-pdp | time="2025-06-13T07:11:17Z" level=warning msg="API_USER not defined, using default value" policy-opa-pdp | time="2025-06-13T07:11:17Z" level=warning msg="API_PASSWORD not defined, using default value" policy-opa-pdp | time="2025-06-13T07:11:17Z" level=warning msg="UseSASLForKAFKA not defined, using default value" policy-opa-pdp | decodedConfig org.apache.kafka.common.security.scram.ScramLoginModule required username="policy-opa-pdp-ku" password="" policy-opa-pdp | time="2025-06-13T07:11:17Z" level=debug msg="Username: " policy-opa-pdp | time="2025-06-13T07:11:17Z" level=debug msg="Password: " policy-opa-pdp | time="2025-06-13T07:11:17Z" level=warning msg="USE_KAFKA_FOR_PATCH not defined, using default value: false" policy-opa-pdp | time="2025-06-13T07:11:17Z" level=debug msg="Configuration module: environment initialised" policy-opa-pdp | DEBU[2025-06-13T07:11:17.9433+00:00] logger initialised Filepath = /var/logs/logs.log, Logsize(MB) = 10, Backups = 3, Loglevel = debug policy-opa-pdp | DEBU[2025-06-13T07:11:17.9437+00:00] Name: opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad policy-opa-pdp | DEBU[2025-06-13T07:11:17.9473+00:00] Starting OPA PDP Service policy-opa-pdp | INFO[2025-06-13T07:11:22.9483+00:00] HTTP server started policy-opa-pdp | DEBU[2025-06-13T07:11:22.9494+00:00] Create an instance of OPA Object policy-opa-pdp | DEBU[2025-06-13T07:11:22.9495+00:00] Configure an instance of OPA Object policy-opa-pdp | DEBU[2025-06-13T07:11:22.9506+00:00] Topic start :::: policy-pdp-pap policy-opa-pdp | DEBU[2025-06-13T07:11:22.9507+00:00] Creating Kafka Consumer singleton instance policy-opa-pdp | &map[auto.offset.reset:latest bootstrap.servers:kafka:9092 group.id:opa-pdp]DEBU[2025-06-13T07:11:22.9530+00:00] Topic Subscribed: policy-pdp-pap policy-opa-pdp | DEBU[2025-06-13T07:11:22.9531+00:00] Created SIngleton consumer instance policy-opa-pdp | DEBU[2025-06-13T07:11:22.9610+00:00] Starting PDP Message Listener..... policy-opa-pdp | DEBU[2025-06-13T07:11:32.9703+00:00] New Ticker started with interval 60000 policy-opa-pdp | DEBU[2025-06-13T07:11:42.9779+00:00] After registration successful delay policy-opa-pdp | DEBU[2025-06-13T07:12:32.9821+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"b48beb6f-bbaf-468a-8cc4-2505213f4cd6","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1749798752981","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-13T07:12:32.9822+00:00] Sending Heartbeat ... 
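[Editor's note] The consumer configuration printed above (&map[auto.offset.reset:latest bootstrap.servers:kafka:9092 group.id:opa-pdp]) and the subscription to policy-pdp-pap match a confluent-kafka-go ConfigMap. A minimal sketch of that setup follows; the client library and the read loop are assumptions based on that log line, not the PDP's actual wiring:

    package main

    import (
        "fmt"
        "log"
        "time"

        "github.com/confluentinc/confluent-kafka-go/kafka"
    )

    func main() {
        // Values copied from the consumer config printed in the log above.
        consumer, err := kafka.NewConsumer(&kafka.ConfigMap{
            "bootstrap.servers": "kafka:9092",
            "group.id":          "opa-pdp",
            "auto.offset.reset": "latest",
        })
        if err != nil {
            log.Fatal(err)
        }
        defer consumer.Close()

        // Subscribe to the PAP topic and loop on incoming messages.
        if err := consumer.SubscribeTopics([]string{"policy-pdp-pap"}, nil); err != nil {
            log.Fatal(err)
        }
        for {
            msg, err := consumer.ReadMessage(10 * time.Second)
            if err != nil {
                continue // timeouts are expected while the topic is idle
            }
            fmt.Printf("[IN|KAFKA|policy-pdp-pap] %s\n", string(msg.Value))
        }
    }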
policy-opa-pdp | 2025/06/13 07:12:32 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-13T07:12:33.0073+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"b48beb6f-bbaf-468a-8cc4-2505213f4cd6","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1749798752981","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-13T07:12:33.0076+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-13T07:12:33.0077+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-13T07:12:33.6150+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"65c680c2-7580-4342-a71d-4215fb6c0c76","timestampMs":1749798753550,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-13T07:12:33.6154+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-13T07:12:33.6159+00:00] PDP_UPDATE Message received: 
{"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"65c680c2-7580-4342-a71d-4215fb6c0c76","timestampMs":1749798753550,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-13T07:12:33.6159+00:00] Policy Is Allowed: slice.capacity.check policy-opa-pdp | DEBU[2025-06-13T07:12:33.6159+00:00] Validating properties data for policy: slice.capacity.check policy-opa-pdp | DEBU[2025-06-13T07:12:33.6159+00:00] Validating properties policy for policy: slice.capacity.check policy-opa-pdp | INFO[2025-06-13T07:12:33.6159+00:00] Validation successful for policy: slice.capacity.check policy-opa-pdp | INFO[2025-06-13T07:12:33.6164+00:00] Directory created: /opt/policies/slice/capacity/check policy-opa-pdp | INFO[2025-06-13T07:12:33.6164+00:00] Policy file saved: /opt/policies/slice/capacity/check/policy.rego policy-opa-pdp | INFO[2025-06-13T07:12:33.6175+00:00] Directory created: /opt/data/node/slice/capacity/check policy-opa-pdp | INFO[2025-06-13T07:12:33.6179+00:00] Data file saved: /opt/data/node/slice/capacity/check/data.json policy-opa-pdp | DEBU[2025-06-13T07:12:33.6183+00:00] Before calling combinedoutput policy-opa-pdp | DEBU[2025-06-13T07:12:33.6384+00:00] Bundle Built Sucessfully.... 
policy-opa-pdp | DEBU[2025-06-13T07:12:33.6412+00:00] storage not found creating : /node policy-opa-pdp | DEBU[2025-06-13T07:12:33.6414+00:00] storage not found creating : /node/slice policy-opa-pdp | DEBU[2025-06-13T07:12:33.6415+00:00] storage not found creating : /node/slice/capacity policy-opa-pdp | DEBU[2025-06-13T07:12:33.6417+00:00] storage not found creating : /node/slice/capacity/check policy-opa-pdp | INFO[2025-06-13T07:12:33.6419+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-13T07:12:33.6420+00:00] Loaded Policy: slice.capacity.check policy-opa-pdp | INFO[2025-06-13T07:12:33.6422+00:00] Processed policies_to_be_deployed successfully policy-opa-pdp | INFO[2025-06-13T07:12:33.6423+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/13 07:12:33 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-13T07:12:33.6426+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"65c680c2-7580-4342-a71d-4215fb6c0c76","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"1363f3b3-f7ea-441b-8a2d-db94a2cbf351","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798753642","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-13T07:12:33.6427+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-13T07:12:33.6428+00:00] 120000 policy-opa-pdp | DEBU[2025-06-13T07:12:33.6430+00:00] New Ticker started with interval 120000 policy-opa-pdp | DEBU[2025-06-13T07:12:33.6509+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"65c680c2-7580-4342-a71d-4215fb6c0c76","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"1363f3b3-f7ea-441b-8a2d-db94a2cbf351","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798753642","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-13T07:12:33.6510+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-13T07:12:33.6510+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-13T07:12:33.6822+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8c2c5889-2f1c-4ea8-98ba-5f77f39df613","timestampMs":1749798753550,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-13T07:12:33.6822+00:00] messageType: PDP_STATE_CHANGE policy-opa-pdp | 
DEBU[2025-06-13T07:12:33.6823+00:00] PDP STATE CHANGE message received: {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8c2c5889-2f1c-4ea8-98ba-5f77f39df613","timestampMs":1749798753550,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-13T07:12:33.6823+00:00] State change from PASSIVE To : ACTIVE policy-opa-pdp | INFO[2025-06-13T07:12:33.6823+00:00] Sending PDP Status With State Change response policy-opa-pdp | 2025/06/13 07:12:33 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-13T07:12:33.6824+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"8c2c5889-2f1c-4ea8-98ba-5f77f39df613","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"9b286f27-e478-4da7-8ec7-024b02d048c7","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798753682","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-13T07:12:33.6824+00:00] PDP_STATUS With State Change Message Sent Successfully policy-opa-pdp | DEBU[2025-06-13T07:12:33.6903+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"8c2c5889-2f1c-4ea8-98ba-5f77f39df613","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"9b286f27-e478-4da7-8ec7-024b02d048c7","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798753682","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-13T07:12:33.6904+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-13T07:12:33.6904+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-13T07:12:34.0024+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ac3d88cb-5c57-4701-b0f7-3d8d8784f812","timestampMs":1749798753983,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-13T07:12:34.0024+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-13T07:12:34.0026+00:00] PDP_UPDATE Message received: {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ac3d88cb-5c57-4701-b0f7-3d8d8784f812","timestampMs":1749798753983,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | INFO[2025-06-13T07:12:34.0026+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/13 07:12:34 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-13T07:12:34.0027+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp 
Update","response":{"responseTo":"ac3d88cb-5c57-4701-b0f7-3d8d8784f812","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"bb345fb7-a0ec-4fa1-8011-106aa18c38b5","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798754002","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-13T07:12:34.0028+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-13T07:12:34.0028+00:00] 120000 policy-opa-pdp | DEBU[2025-06-13T07:12:34.0100+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ac3d88cb-5c57-4701-b0f7-3d8d8784f812","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"bb345fb7-a0ec-4fa1-8011-106aa18c38b5","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798754002","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-13T07:12:34.0101+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-13T07:12:34.0101+00:00] discarding event of type PDP_STATUS policy-opa-pdp | 2025/06/13 07:13:32 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-13T07:13:32.9782+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"692c21fa-4a7d-4b0c-933f-0de051ddcde9","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798812978","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-13T07:13:32.9783+00:00] Sending Heartbeat ... 
policy-opa-pdp | DEBU[2025-06-13T07:13:32.9886+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"692c21fa-4a7d-4b0c-933f-0de051ddcde9","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798812978","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-13T07:13:32.9887+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-13T07:13:32.9888+00:00] discarding event of type PDP_STATUS policy-opa-pdp | WARN[2025-06-13T07:13:50.6118+00:00] Invalid or Missing Request ID policy-opa-pdp | DEBU[2025-06-13T07:13:50.6139+00:00] Received Health Check message policy-opa-pdp | INFO[2025-06-13T07:13:50.6295+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-13T07:13:50.6297+00:00] datapath to get Data : / policy-opa-pdp | DEBU[2025-06-13T07:13:50.6297+00:00] Json Data at /: {"node":{"slice":{"capacity":{"check":{"threshold":70}}}},"system":{"version":{"build_commit":"","build_hostname":"","build_timestamp":"","version":"1.1.0"}}} policy-opa-pdp | DEBU[2025-06-13T07:13:51.9672+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8dd374a9-79bd-438c-9600-8a6cda77dc82","timestampMs":1749798831907,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-13T07:13:51.9673+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-13T07:13:51.9675+00:00] PDP_UPDATE Message received: 
{"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8dd374a9-79bd-438c-9600-8a6cda77dc82","timestampMs":1749798831907,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-13T07:13:51.9675+00:00] Check if Policy is Already Deployed: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-13T07:13:51.9676+00:00] Policy is new and should be deployed: zoneB 1.0.6 policy-opa-pdp | DEBU[2025-06-13T07:13:51.9676+00:00] Policy Is Allowed: zoneB policy-opa-pdp | DEBU[2025-06-13T07:13:51.9676+00:00] Validating properties data for policy: zoneB policy-opa-pdp | DEBU[2025-06-13T07:13:51.9676+00:00] Validating properties policy for policy: zoneB policy-opa-pdp | INFO[2025-06-13T07:13:51.9676+00:00] Validation successful for policy: zoneB policy-opa-pdp | INFO[2025-06-13T07:13:51.9678+00:00] Directory created: /opt/policies/zoneB policy-opa-pdp | INFO[2025-06-13T07:13:51.9679+00:00] Policy file saved: /opt/policies/zoneB/policy.rego policy-opa-pdp | INFO[2025-06-13T07:13:51.9679+00:00] Directory created: /opt/data/node/zoneB policy-opa-pdp | INFO[2025-06-13T07:13:51.9680+00:00] Data file saved: /opt/data/node/zoneB/data.json policy-opa-pdp | DEBU[2025-06-13T07:13:51.9680+00:00] Before calling combinedoutput policy-opa-pdp | DEBU[2025-06-13T07:13:51.9849+00:00] Bundle Built Sucessfully.... 
policy-opa-pdp | DEBU[2025-06-13T07:13:51.9900+00:00] storage not found creating : /node/zoneB policy-opa-pdp | INFO[2025-06-13T07:13:51.9902+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.zoneB" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "zoneB" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "zoneB", policy-opa-pdp | "policy-version": "1.0.6" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-13T07:13:51.9902+00:00] Loaded Policy: zoneB policy-opa-pdp | INFO[2025-06-13T07:13:51.9902+00:00] Processed policies_to_be_deployed successfully policy-opa-pdp | INFO[2025-06-13T07:13:51.9903+00:00] Sending PDP Status With Update Response policy-opa-pdp | DEBU[2025-06-13T07:13:51.9904+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8dd374a9-79bd-438c-9600-8a6cda77dc82","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"2af62cb2-4297-4abf-a27d-0aa261166fc1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798831990","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-13T07:13:51.9904+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | 2025/06/13 07:13:51 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-13T07:13:51.9904+00:00] 0 policy-opa-pdp | DEBU[2025-06-13T07:13:51.9985+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8dd374a9-79bd-438c-9600-8a6cda77dc82","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"2af62cb2-4297-4abf-a27d-0aa261166fc1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798831990","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-13T07:13:51.9986+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-13T07:13:51.9988+00:00] discarding event of type PDP_STATUS policy-opa-pdp | INFO[2025-06-13T07:14:16.1435+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-13T07:14:16.1436+00:00] datapath to get Data : /node/zoneB/zone policy-opa-pdp | DEBU[2025-06-13T07:14:16.1437+00:00] Json Data at /node/zoneB/zone: 
{"zone_access_logs":[{"access":"granted","log_id":"log1","timestamp":"2024-11-01T09:00:00Z","user":"user1","zone_id":"zoneA"},{"access":"denied","log_id":"log2","timestamp":"2024-11-01T10:30:00Z","user":"user2","zone_id":"zoneA"},{"access":"granted","log_id":"log3","timestamp":"2024-11-01T11:00:00Z","user":"user3","zone_id":"zoneB"}]} policy-opa-pdp | DEBU[2025-06-13T07:14:16.1544+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-13T07:14:16.1545+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-13T07:14:16.1548+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-13T07:14:16.1548+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"b28464f4-ac6c-4253-9bda-2d13811abce0","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"f7648fea-fb9e-4dde-99a5-3b02c0e44ade","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":510,"timer_rego_query_compile_ns":105832,"timer_rego_query_eval_ns":307566,"timer_rego_query_parse_ns":72632,"timer_sdk_decision_eval_ns":627214},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-13T07:14:16Z","timestamp":"2025-06-13T07:14:16.154873138Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-13T07:14:16.1558+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "b28464f4-ac6c-4253-9bda-2d13811abce0", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_log_view": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "has_zone_access": [ policy-opa-pdp | { policy-opa-pdp | "access": "granted", policy-opa-pdp | "user": "user1" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-13T07:14:16.1697+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-13T07:14:16.1698+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-13T07:14:16.1701+00:00] Validation successful for request fields policy-opa-pdp | WARN[2025-06-13T07:14:16.1701+00:00] Policy Name zoeB does not exist policy-opa-pdp | DEBU[2025-06-13T07:14:16.1767+00:00] PDP received a decision request. 
policy-opa-pdp | DEBU[2025-06-13T07:14:16.1767+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-13T07:14:16.1770+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-13T07:14:16.1771+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"be1f8136-973e-4cad-8d4f-30b04faeea86","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"f7648fea-fb9e-4dde-99a5-3b02c0e44ade","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":1100,"timer_rego_query_eval_ns":558162,"timer_sdk_decision_eval_ns":709216},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-13T07:14:16Z","timestamp":"2025-06-13T07:14:16.177231993Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-13T07:14:16.1782+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "be1f8136-973e-4cad-8d4f-30b04faeea86", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_log_view": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "has_zone_access": [ policy-opa-pdp | { policy-opa-pdp | "access": "granted", policy-opa-pdp | "user": "user1" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-13T07:14:16.5181+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"392c61ad-08d0-4468-893f-238210710be1","timestampMs":1749798856469,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-13T07:14:16.5182+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-13T07:14:16.5184+00:00] PDP_UPDATE Message received: {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"392c61ad-08d0-4468-893f-238210710be1","timestampMs":1749798856469,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | INFO[2025-06-13T07:14:16.5184+00:00] Found Policies to be undeployed policy-opa-pdp | INFO[2025-06-13T07:14:16.5184+00:00] Extracted Policy Name: zoneB, Version: 1.0.6 for undeployment policy-opa-pdp | DEBU[2025-06-13T07:14:16.5184+00:00] Deleting Policy from OPA : /zoneB policy-opa-pdp | DEBU[2025-06-13T07:14:16.5210+00:00] Removing policy directory: /opt/policies/zoneB policy-opa-pdp | DEBU[2025-06-13T07:14:16.5212+00:00] Deleting data from OPA : /node/zoneB policy-opa-pdp | DEBU[2025-06-13T07:14:16.5212+00:00] Analyzing dataPath: /node/zoneB policy-opa-pdp | DEBU[2025-06-13T07:14:16.5212+00:00] Path segments: [ node zoneB] policy-opa-pdp | DEBU[2025-06-13T07:14:16.5212+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/zoneB policy-opa-pdp | DEBU[2025-06-13T07:14:16.5213+00:00] Removing data directory: /opt/data/node/zoneB policy-opa-pdp | 
INFO[2025-06-13T07:14:16.5214+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-13T07:14:16.5215+00:00] Policies Map After Undeployment : { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-13T07:14:16.5215+00:00] Processed policies_to_be_undeployed successfully policy-opa-pdp | INFO[2025-06-13T07:14:16.5215+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/13 07:14:16 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-13T07:14:16.5216+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"392c61ad-08d0-4468-893f-238210710be1","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"cb9c9d82-0b9f-4710-b67c-58e6f0db2328","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798856521","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-13T07:14:16.5216+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-13T07:14:16.5216+00:00] 0 policy-opa-pdp | DEBU[2025-06-13T07:14:16.5304+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"392c61ad-08d0-4468-893f-238210710be1","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"cb9c9d82-0b9f-4710-b67c-58e6f0db2328","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798856521","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-13T07:14:16.5305+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-13T07:14:16.5306+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-13T07:14:17.6740+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | 
{"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"77c619d9-18d3-4e40-8dcc-5fadec588ba9","timestampMs":1749798857655,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-13T07:14:17.6743+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-13T07:14:17.6746+00:00] PDP_UPDATE Message received: {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"77c619d9-18d3-4e40-8dcc-5fadec588ba9","timestampMs":1749798857655,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-13T07:14:17.6747+00:00] Check if Policy is Already Deployed: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-13T07:14:17.6748+00:00] Policy is new and should be deployed: vehicle 1.0.6 policy-opa-pdp | DEBU[2025-06-13T07:14:17.6749+00:00] Policy Is Allowed: vehicle policy-opa-pdp | 
DEBU[2025-06-13T07:14:17.6750+00:00] Validating properties data for policy: vehicle policy-opa-pdp | DEBU[2025-06-13T07:14:17.6751+00:00] Validating properties policy for policy: vehicle policy-opa-pdp | INFO[2025-06-13T07:14:17.6752+00:00] Validation successful for policy: vehicle policy-opa-pdp | INFO[2025-06-13T07:14:17.6754+00:00] Directory created: /opt/policies/vehicle policy-opa-pdp | INFO[2025-06-13T07:14:17.6756+00:00] Policy file saved: /opt/policies/vehicle/policy.rego policy-opa-pdp | INFO[2025-06-13T07:14:17.6758+00:00] Directory created: /opt/data/node/vehicle policy-opa-pdp | INFO[2025-06-13T07:14:17.6759+00:00] Data file saved: /opt/data/node/vehicle/data.json policy-opa-pdp | DEBU[2025-06-13T07:14:17.6760+00:00] Before calling combinedoutput policy-opa-pdp | DEBU[2025-06-13T07:14:17.6931+00:00] Bundle Built Sucessfully.... policy-opa-pdp | DEBU[2025-06-13T07:14:17.6999+00:00] storage not found creating : /node/vehicle policy-opa-pdp | INFO[2025-06-13T07:14:17.7001+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.vehicle" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "vehicle" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "vehicle", policy-opa-pdp | "policy-version": "1.0.6" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-13T07:14:17.7001+00:00] Loaded Policy: vehicle policy-opa-pdp | INFO[2025-06-13T07:14:17.7001+00:00] Processed policies_to_be_deployed successfully policy-opa-pdp | INFO[2025-06-13T07:14:17.7003+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/13 07:14:17 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-13T07:14:17.7004+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"77c619d9-18d3-4e40-8dcc-5fadec588ba9","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"7a1b17c2-cc1f-445b-8d13-8f973ec4aee0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798857700","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-13T07:14:17.7004+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-13T07:14:17.7004+00:00] 0 policy-opa-pdp | DEBU[2025-06-13T07:14:17.7097+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"77c619d9-18d3-4e40-8dcc-5fadec588ba9","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": 
\"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"7a1b17c2-cc1f-445b-8d13-8f973ec4aee0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798857700","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-13T07:14:17.7097+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-13T07:14:17.7098+00:00] discarding event of type PDP_STATUS policy-opa-pdp | 2025/06/13 07:14:33 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-13T07:14:33.6445+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"8253ca4b-6c58-4c20-8302-22c6df8476c4","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798873643","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-13T07:14:33.6445+00:00] Sending Heartbeat ... policy-opa-pdp | DEBU[2025-06-13T07:14:33.6534+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"8253ca4b-6c58-4c20-8302-22c6df8476c4","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798873643","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-13T07:14:33.6537+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-13T07:14:33.6538+00:00] discarding event of type PDP_STATUS policy-opa-pdp | INFO[2025-06-13T07:14:41.7470+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-13T07:14:41.7471+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | DEBU[2025-06-13T07:14:41.7471+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} policy-opa-pdp | INFO[2025-06-13T07:14:41.7600+00:00] PDP received a request to update data through API policy-opa-pdp | DEBU[2025-06-13T07:14:41.7604+00:00] All fields are valid! 
policy-opa-pdp | INFO[2025-06-13T07:14:41.7605+00:00] data : [map[op:add path:/round value:trail]] policy-opa-pdp | INFO[2025-06-13T07:14:41.7605+00:00] policy name : vehicle policy-opa-pdp | DEBU[2025-06-13T07:14:41.7605+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] policy-opa-pdp | DEBU[2025-06-13T07:14:41.7606+00:00] dirParts : [ node vehicle] policy-opa-pdp | INFO[2025-06-13T07:14:41.7608+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} policy-opa-pdp | DEBU[2025-06-13T07:14:41.7608+00:00] root: /node/vehicle policy-opa-pdp | DEBU[2025-06-13T07:14:41.7608+00:00] path : round policy-opa-pdp | INFO[2025-06-13T07:14:41.7608+00:00] calling ParsePatchPathEscaped to check the path policy-opa-pdp | DEBU[2025-06-13T07:14:41.7609+00:00] No path conflicts detected policy-opa-pdp | INFO[2025-06-13T07:14:41.7609+00:00] Updated the data in the corresponding path successfully policy-opa-pdp | INFO[2025-06-13T07:14:41.7694+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-13T07:14:41.7695+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | DEBU[2025-06-13T07:14:41.7696+00:00] Json Data at /node/vehicle: {"round":"trail","vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} policy-opa-pdp | INFO[2025-06-13T07:14:41.7798+00:00] PDP received a request to update data through API policy-opa-pdp | DEBU[2025-06-13T07:14:41.7800+00:00] All fields are valid! policy-opa-pdp | INFO[2025-06-13T07:14:41.7801+00:00] data : [map[op:replace path:/round value:%!s(float64=578)]] policy-opa-pdp | INFO[2025-06-13T07:14:41.7801+00:00] policy name : vehicle policy-opa-pdp | DEBU[2025-06-13T07:14:41.7802+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] policy-opa-pdp | DEBU[2025-06-13T07:14:41.7802+00:00] dirParts : [ node vehicle] policy-opa-pdp | INFO[2025-06-13T07:14:41.7802+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} policy-opa-pdp | DEBU[2025-06-13T07:14:41.7802+00:00] root: /node/vehicle policy-opa-pdp | DEBU[2025-06-13T07:14:41.7802+00:00] path : round policy-opa-pdp | INFO[2025-06-13T07:14:41.7802+00:00] calling ParsePatchPathEscaped to check the path policy-opa-pdp | DEBU[2025-06-13T07:14:41.7803+00:00] No path conflicts detected policy-opa-pdp | INFO[2025-06-13T07:14:41.7803+00:00] Updated the data in the corresponding path successfully policy-opa-pdp | INFO[2025-06-13T07:14:41.7869+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-13T07:14:41.7869+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | DEBU[2025-06-13T07:14:41.7870+00:00] Json Data at /node/vehicle: {"round":578,"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} policy-opa-pdp | INFO[2025-06-13T07:14:41.7967+00:00] PDP received a request to update data through API policy-opa-pdp | DEBU[2025-06-13T07:14:41.7969+00:00] All fields are valid! 
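[Editor's note] The update requests above apply JSON-Patch style operations against the data under /node/vehicle (add /round with value "trail", then replace it with 578), and the GETs in between show the document before and after each patch. The sketch below reproduces the add operation with standard RFC 6902 semantics using the evanphx/json-patch library purely as an illustration; the PDP applies patches through its own storage layer, as the ParsePatchPathEscaped log line indicates:

    package main

    import (
        "fmt"
        "log"

        jsonpatch "github.com/evanphx/json-patch"
    )

    func main() {
        // Document shape trimmed from the GET on /node/vehicle above.
        original := []byte(`{"vehicles":[{"owner":"user1","vehicle_id":"v1"}]}`)

        // Same operation as the logged request: add /round with value "trail".
        patch, err := jsonpatch.DecodePatch([]byte(`[{"op":"add","path":"/round","value":"trail"}]`))
        if err != nil {
            log.Fatal(err)
        }
        modified, err := patch.Apply(original)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(modified)) // document now also contains "round":"trail"
    }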
policy-opa-pdp | INFO[2025-06-13T07:14:41.7970+00:00] data : [map[op:remove path:/round]] policy-opa-pdp | INFO[2025-06-13T07:14:41.7970+00:00] policy name : vehicle policy-opa-pdp | DEBU[2025-06-13T07:14:41.7970+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] policy-opa-pdp | DEBU[2025-06-13T07:14:41.7970+00:00] dirParts : [ node vehicle] policy-opa-pdp | INFO[2025-06-13T07:14:41.7971+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} policy-opa-pdp | DEBU[2025-06-13T07:14:41.7971+00:00] root: /node/vehicle policy-opa-pdp | DEBU[2025-06-13T07:14:41.7971+00:00] path : round policy-opa-pdp | INFO[2025-06-13T07:14:41.7971+00:00] calling ParsePatchPathEscaped to check the path policy-opa-pdp | DEBU[2025-06-13T07:14:41.7972+00:00] No path conflicts detected policy-opa-pdp | INFO[2025-06-13T07:14:41.7972+00:00] Updated the data in the corresponding path successfully policy-opa-pdp | INFO[2025-06-13T07:14:41.8039+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-13T07:14:41.8040+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | DEBU[2025-06-13T07:14:41.8040+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} policy-opa-pdp | DEBU[2025-06-13T07:14:41.8136+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-13T07:14:41.8137+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-13T07:14:41.8140+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-13T07:14:41.8141+00:00] SDK making a decision policy-opa-pdp | DEBU[2025-06-13T07:14:41.8162+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "efb3476b-3b92-4b98-af52-dc66c878d694", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_granted": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "user_has_vehicle_access": [ policy-opa-pdp | { policy-opa-pdp | "status": "available", policy-opa-pdp | "type": "car" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | {"decision_id":"efb3476b-3b92-4b98-af52-dc66c878d694","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"f7648fea-fb9e-4dde-99a5-3b02c0e44ade","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":1040,"timer_rego_query_compile_ns":153793,"timer_rego_query_eval_ns":1047744,"timer_rego_query_parse_ns":143383,"timer_sdk_decision_eval_ns":1531303},"msg":"Decision Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-13T07:14:41Z","timestamp":"2025-06-13T07:14:41.814198995Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-13T07:14:41.8263+00:00] PDP received a decision request. 
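Editor's note: the three dynamic-data updates logged above are JSON Patch-style operations (add, replace, remove on /round); the Go map notation such as [map[op:add path:/round value:trail]] is simply how the PDP prints the parsed patch, and %!s(float64=578) is Go's rendering of a JSON number decoded as float64. A minimal Python sketch (plain dict operations, not the PDP's own patch handling) reproduces the three intermediate documents returned by the GET calls above:

```python
import copy
import json

# Starting document, as returned by the first GET on /node/vehicle.
doc = {
    "vehicles": [
        {"owner": "user1", "status": "available", "type": "car", "vehicle_id": "v1"},
        {"owner": "user2", "status": "in use", "type": "bike", "vehicle_id": "v2"},
    ]
}

def apply_patch(document, patch):
    """Apply a tiny subset of RFC 6902 (top-level add/replace/remove only)."""
    result = copy.deepcopy(document)
    for op in patch:
        key = op["path"].lstrip("/")
        if op["op"] in ("add", "replace"):
            result[key] = op["value"]
        elif op["op"] == "remove":
            result.pop(key, None)
    return result

doc = apply_patch(doc, [{"op": "add", "path": "/round", "value": "trail"}])
print(json.dumps(doc))   # "round": "trail" present, matching the second GET

doc = apply_patch(doc, [{"op": "replace", "path": "/round", "value": 578}])
print(json.dumps(doc))   # "round": 578, matching the third GET

doc = apply_patch(doc, [{"op": "remove", "path": "/round"}])
print(json.dumps(doc))   # back to the original document, matching the final GET
```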
policy-opa-pdp | DEBU[2025-06-13T07:14:41.8263+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-13T07:14:41.8266+00:00] Validation successful for request fields policy-opa-pdp | WARN[2025-06-13T07:14:41.8266+00:00] Policy Name vehile does not exist policy-opa-pdp | DEBU[2025-06-13T07:14:41.8329+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-13T07:14:41.8330+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-13T07:14:41.8335+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-13T07:14:41.8336+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"851fdd7a-ef79-493f-ad60-49fc9a5c3c4f","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"f7648fea-fb9e-4dde-99a5-3b02c0e44ade","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":1050,"timer_rego_query_eval_ns":555452,"timer_sdk_decision_eval_ns":710045},"msg":"Decision Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-13T07:14:41Z","timestamp":"2025-06-13T07:14:41.833857786Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-13T07:14:41.8348+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "851fdd7a-ef79-493f-ad60-49fc9a5c3c4f", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_granted": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "user_has_vehicle_access": [ policy-opa-pdp | { policy-opa-pdp | "status": "available", policy-opa-pdp | "type": "car" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-13T07:14:42.1066+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"39631ade-9478-410a-bff4-af88c805ba72","timestampMs":1749798882080,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-13T07:14:42.1068+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-13T07:14:42.1071+00:00] PDP_UPDATE Message received: {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"39631ade-9478-410a-bff4-af88c805ba72","timestampMs":1749798882080,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | INFO[2025-06-13T07:14:42.1071+00:00] Found Policies to be undeployed policy-opa-pdp | INFO[2025-06-13T07:14:42.1072+00:00] Extracted Policy Name: vehicle, Version: 1.0.6 for undeployment policy-opa-pdp | DEBU[2025-06-13T07:14:42.1073+00:00] Deleting Policy from OPA : /vehicle policy-opa-pdp | DEBU[2025-06-13T07:14:42.1099+00:00] Removing policy directory: /opt/policies/vehicle policy-opa-pdp | DEBU[2025-06-13T07:14:42.1102+00:00] Deleting data from OPA : /node/vehicle policy-opa-pdp | DEBU[2025-06-13T07:14:42.1103+00:00] Analyzing dataPath: /node/vehicle policy-opa-pdp | DEBU[2025-06-13T07:14:42.1104+00:00] 
Path segments: [ node vehicle] policy-opa-pdp | DEBU[2025-06-13T07:14:42.1104+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/vehicle policy-opa-pdp | DEBU[2025-06-13T07:14:42.1105+00:00] Removing data directory: /opt/data/node/vehicle policy-opa-pdp | INFO[2025-06-13T07:14:42.1108+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-13T07:14:42.1108+00:00] Policies Map After Undeployment : { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-13T07:14:42.1110+00:00] Processed policies_to_be_undeployed successfully policy-opa-pdp | INFO[2025-06-13T07:14:42.1111+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/13 07:14:42 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-13T07:14:42.1113+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"39631ade-9478-410a-bff4-af88c805ba72","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"322086b3-771a-48e8-890a-3750df1a94e0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798882111","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-13T07:14:42.1113+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-13T07:14:42.1114+00:00] 0 policy-opa-pdp | DEBU[2025-06-13T07:14:42.1187+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"39631ade-9478-410a-bff4-af88c805ba72","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"322086b3-771a-48e8-890a-3750df1a94e0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798882111","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-13T07:14:42.1189+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-13T07:14:42.1189+00:00] discarding event of type PDP_STATUS policy-opa-pdp | INFO[2025-06-13T07:14:42.4893+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-13T07:14:42.4894+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | WARN[2025-06-13T07:14:42.4894+00:00] Error in reading data under /node/vehicle path policy-opa-pdp | ERRO[2025-06-13T07:14:42.4894+00:00] Error in getting 
data - storage_not_found_error: /node/vehicle: document does not exist policy-opa-pdp | INFO[2025-06-13T07:14:42.5038+00:00] PDP received a request to update data through API policy-opa-pdp | DEBU[2025-06-13T07:14:42.5040+00:00] All fields are valid! policy-opa-pdp | INFO[2025-06-13T07:14:42.5040+00:00] data : [map[op:remove path:/round]] policy-opa-pdp | INFO[2025-06-13T07:14:42.5040+00:00] policy name : vehicle policy-opa-pdp | DEBU[2025-06-13T07:14:42.5041+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0]] policy-opa-pdp | ERRO[2025-06-13T07:14:42.5041+00:00] Policy associated with the patch request does not exists policy-opa-pdp | DEBU[2025-06-13T07:14:43.2100+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSI
sCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"11344b48-77c7-4139-bbbe-69b82470a72d","timestampMs":1749798883192,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-13T07:14:43.2105+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-13T07:14:43.2108+00:00] PDP_UPDATE Message received: 
{"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"11344b48-77c7-4139-bbbe-69b82470a72d","timestampMs":1749798883192,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-13T07:14:43.2109+00:00] Check if Policy is Already Deployed: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-13T07:14:43.2110+00:00] Policy is new and should be deployed: abac 1.0.7 policy-opa-pdp | DEBU[2025-06-13T07:14:43.2111+00:00] Policy Is Allowed: abac policy-opa-pdp | DEBU[2025-06-13T07:14:43.2112+00:00] Validating properties data for policy: abac policy-opa-pdp | DEBU[2025-06-13T07:14:43.2112+00:00] Validating properties policy for policy: abac policy-opa-pdp | INFO[2025-06-13T07:14:43.2113+00:00] Validation successful for policy: abac policy-opa-pdp | INFO[2025-06-13T07:14:43.2115+00:00] Directory created: /opt/policies/abac policy-opa-pdp | INFO[2025-06-13T07:14:43.2116+00:00] Policy file saved: /opt/policies/abac/policy.rego policy-opa-pdp | INFO[2025-06-13T07:14:43.2118+00:00] Directory created: /opt/data/node/abac policy-opa-pdp | INFO[2025-06-13T07:14:43.2119+00:00] Data file saved: /opt/data/node/abac/data.json policy-opa-pdp | DEBU[2025-06-13T07:14:43.2120+00:00] Before calling combinedoutput policy-opa-pdp | DEBU[2025-06-13T07:14:43.2291+00:00] Bundle Built Sucessfully.... 
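Editor's note: in the PDP_UPDATE message above, the onap.policies.native.opa policy is delivered with base64-encoded payloads: properties.data."node.abac" carries the JSON data document (the same sensor_data document returned by the GET on /node/abac further down), and properties.policy.abac carries the Rego module, a package abac whose rules allow, action_is_read and viewable_sensor_data are the fields visible in the decision Result later in this log. A small, hypothetical helper for inspecting such messages offline (illustration only, not part of the PDP code) could look like this:

```python
import base64
import json

def decode_native_opa_policy(pdp_update: dict) -> list[dict]:
    """Decode the base64 data/policy payloads of each onap.policies.native.opa
    policy in a parsed PDP_UPDATE message (illustrative helper, not PDP code)."""
    decoded = []
    for policy in pdp_update.get("policiesToBeDeployed", []):
        props = policy.get("properties", {})
        decoded.append({
            "name": policy.get("name"),
            "version": policy.get("version"),
            # JSON data documents, keyed by data path (e.g. "node.abac")
            "data": {
                key: json.loads(base64.b64decode(value))
                for key, value in props.get("data", {}).items()
            },
            # Rego sources, keyed by policy key (e.g. "abac")
            "policy": {
                key: base64.b64decode(value).decode("utf-8")
                for key, value in props.get("policy", {}).items()
            },
        })
    return decoded

# Usage: parse the PDP_UPDATE JSON logged above, then print the Rego source.
# update = json.loads(pdp_update_json)
# print(decode_native_opa_policy(update)[0]["policy"]["abac"])
```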
policy-opa-pdp | DEBU[2025-06-13T07:14:43.2353+00:00] storage not found creating : /node/abac policy-opa-pdp | INFO[2025-06-13T07:14:43.2356+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.abac" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "abac" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "abac", policy-opa-pdp | "policy-version": "1.0.7" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-13T07:14:43.2356+00:00] Loaded Policy: abac policy-opa-pdp | INFO[2025-06-13T07:14:43.2357+00:00] Processed policies_to_be_deployed successfully policy-opa-pdp | INFO[2025-06-13T07:14:43.2357+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/13 07:14:43 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-13T07:14:43.2358+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"11344b48-77c7-4139-bbbe-69b82470a72d","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"3738cb0b-7fa0-4b50-af21-12b1d64dad09","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798883235","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-13T07:14:43.2358+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-13T07:14:43.2359+00:00] 0 policy-opa-pdp | DEBU[2025-06-13T07:14:43.2485+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"11344b48-77c7-4139-bbbe-69b82470a72d","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"3738cb0b-7fa0-4b50-af21-12b1d64dad09","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798883235","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-13T07:14:43.2487+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-13T07:14:43.2487+00:00] discarding event of type PDP_STATUS policy-opa-pdp | INFO[2025-06-13T07:15:07.3047+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-13T07:15:07.3048+00:00] datapath to get Data : /node/abac policy-opa-pdp | DEBU[2025-06-13T07:15:07.3049+00:00] Json Data at /node/abac: {"sensor_data":[{"humidity":"40%","id":"0001","location":"Sri Lanka","particle_density":"1.3 g/l","precipitation":"1000 mm","temperature":"28 C","timestamp":"2024-02-26","windspeed":"5.5 m/s"},{"humidity":"45%","id":"0002","location":"Colombo","particle_density":"1.5 g/l","precipitation":"1200 mm","temperature":"30 
C","timestamp":"2024-02-26","windspeed":"6.0 m/s"},{"humidity":"60%","id":"0003","location":"Kandy","particle_density":"1.1 g/l","precipitation":"800 mm","temperature":"25 C","timestamp":"2024-02-26","windspeed":"4.5 m/s"},{"humidity":"30%","id":"0004","location":"Galle","particle_density":"1.8 g/l","precipitation":"500 mm","temperature":"35 C","timestamp":"2024-02-27","windspeed":"7.2 m/s"},{"humidity":"20%","id":"0005","location":"Jaffna","particle_density":"0.9 g/l","precipitation":"300 mm","temperature":"-5 C","timestamp":"2024-02-27","windspeed":"3.8 m/s"},{"humidity":"55%","id":"0006","location":"Trincomalee","particle_density":"1.2 g/l","precipitation":"1000 mm","temperature":"20 C","timestamp":"2024-02-28","windspeed":"5.0 m/s"},{"humidity":"50%","id":"0007","location":"Nuwara Eliya","particle_density":"1.3 g/l","precipitation":"600 mm","temperature":"25 C","timestamp":"2024-02-28","windspeed":"4.0 m/s"},{"humidity":"40%","id":"0008","location":"Anuradhapura","particle_density":"1.4 g/l","precipitation":"700 mm","temperature":"28 C","timestamp":"2024-02-29","windspeed":"5.8 m/s"},{"humidity":"65%","id":"0009","location":"Matara","particle_density":"1.6 g/l","precipitation":"900 mm","temperature":"32 C","timestamp":"2024-02-29","windspeed":"6.5 m/s"}]} policy-opa-pdp | DEBU[2025-06-13T07:15:07.3149+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-13T07:15:07.3150+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-13T07:15:07.3155+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-13T07:15:07.3156+00:00] SDK making a decision policy-opa-pdp | DEBU[2025-06-13T07:15:07.3175+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "eb20c5e9-3e77-467c-9445-90901d48e48f", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_read": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "viewable_sensor_data": [ policy-opa-pdp | { policy-opa-pdp | "location": "Galle", policy-opa-pdp | "precipitation": "500 mm", policy-opa-pdp | "temperature": "35 C", policy-opa-pdp | "windspeed": "7.2 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Jaffna", policy-opa-pdp | "precipitation": "300 mm", policy-opa-pdp | "temperature": "-5 C", policy-opa-pdp | "windspeed": "3.8 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Nuwara Eliya", policy-opa-pdp | "precipitation": "600 mm", policy-opa-pdp | "temperature": "25 C", policy-opa-pdp | "windspeed": "4.0 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Trincomalee", policy-opa-pdp | "precipitation": "1000 mm", policy-opa-pdp | "temperature": "20 C", policy-opa-pdp | "windspeed": "5.0 m/s" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | {"decision_id":"eb20c5e9-3e77-467c-9445-90901d48e48f","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"f7648fea-fb9e-4dde-99a5-3b02c0e44ade","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":870,"timer_rego_query_compile_ns":166534,"timer_rego_query_eval_ns":721676,"timer_rego_query_parse_ns":118023,"timer_sdk_decision_eval_ns":1200617},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 
mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-13T07:15:07Z","timestamp":"2025-06-13T07:15:07.315875608Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-13T07:15:07.3243+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-13T07:15:07.3243+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-13T07:15:07.3246+00:00] Validation successful for request fields policy-opa-pdp | WARN[2025-06-13T07:15:07.3246+00:00] Policy Name abc does not exist policy-opa-pdp | DEBU[2025-06-13T07:15:07.3309+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-13T07:15:07.3310+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-13T07:15:07.3313+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-13T07:15:07.3314+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"86de70cf-7826-46e7-b56a-232737494c45","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"f7648fea-fb9e-4dde-99a5-3b02c0e44ade","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":1120,"timer_rego_query_eval_ns":994903,"timer_sdk_decision_eval_ns":1119765},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-13T07:15:07Z","timestamp":"2025-06-13T07:15:07.331460987Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-13T07:15:07.3329+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "86de70cf-7826-46e7-b56a-232737494c45", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_read": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "viewable_sensor_data": [ policy-opa-pdp | { policy-opa-pdp | "location": "Galle", policy-opa-pdp | "precipitation": "500 mm", policy-opa-pdp | "temperature": "35 C", policy-opa-pdp | "windspeed": "7.2 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Jaffna", policy-opa-pdp | "precipitation": "300 mm", policy-opa-pdp | "temperature": "-5 C", policy-opa-pdp | "windspeed": "3.8 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Nuwara Eliya", policy-opa-pdp | "precipitation": "600 mm", policy-opa-pdp | "temperature": "25 C", policy-opa-pdp | "windspeed": "4.0 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Trincomalee", policy-opa-pdp | "precipitation": "1000 mm", policy-opa-pdp | "temperature": "20 C", policy-opa-pdp | "windspeed": "5.0 m/s" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | 
"Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-13T07:15:07.8751+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"d404f6e7-c762-4013-8b18-b201c7b9bd2b","timestampMs":1749798907854,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-13T07:15:07.8752+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-13T07:15:07.8753+00:00] PDP_UPDATE Message received: {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"d404f6e7-c762-4013-8b18-b201c7b9bd2b","timestampMs":1749798907854,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | INFO[2025-06-13T07:15:07.8753+00:00] Found Policies to be undeployed policy-opa-pdp | INFO[2025-06-13T07:15:07.8753+00:00] Extracted Policy Name: abac, Version: 1.0.7 for undeployment policy-opa-pdp | DEBU[2025-06-13T07:15:07.8754+00:00] Deleting Policy from OPA : /abac policy-opa-pdp | DEBU[2025-06-13T07:15:07.8771+00:00] Removing policy directory: /opt/policies/abac policy-opa-pdp | DEBU[2025-06-13T07:15:07.8778+00:00] Deleting data from OPA : /node/abac policy-opa-pdp | DEBU[2025-06-13T07:15:07.8778+00:00] Analyzing dataPath: /node/abac policy-opa-pdp | DEBU[2025-06-13T07:15:07.8778+00:00] Path segments: [ node abac] policy-opa-pdp | DEBU[2025-06-13T07:15:07.8779+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/abac policy-opa-pdp | DEBU[2025-06-13T07:15:07.8779+00:00] Removing data directory: /opt/data/node/abac policy-opa-pdp | INFO[2025-06-13T07:15:07.8780+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-13T07:15:07.8780+00:00] Policies Map After Undeployment : { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-13T07:15:07.8780+00:00] Processed policies_to_be_undeployed successfully policy-opa-pdp | INFO[2025-06-13T07:15:07.8780+00:00] Sending PDP Status With Update Response policy-opa-pdp | DEBU[2025-06-13T07:15:07.8781+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"d404f6e7-c762-4013-8b18-b201c7b9bd2b","responseStatus":"SUCCESS","responseMessage":"PDP Update 
Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"97a69133-831c-493a-a449-15747fbb9288","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798907878","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-13T07:15:07.8781+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-13T07:15:07.8781+00:00] 0 policy-opa-pdp | 2025/06/13 07:15:07 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-13T07:15:07.8857+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"d404f6e7-c762-4013-8b18-b201c7b9bd2b","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"97a69133-831c-493a-a449-15747fbb9288","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798907878","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-13T07:15:07.8858+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-13T07:15:07.8858+00:00] discarding event of type PDP_STATUS policy-pap | Waiting for api port 6969... policy-pap | api (172.17.0.6:6969) open policy-pap | Waiting for kafka port 9092... policy-pap | kafka (172.17.0.8:9092) open policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-pap | policy-pap | . ____ _ __ _ _ policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / policy-pap | =========|_|==============|___/=/_/_/_/ policy-pap | policy-pap | :: Spring Boot :: (v3.4.6) policy-pap | policy-pap | [2025-06-13T07:10:28.655+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 52 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-pap | [2025-06-13T07:10:28.656+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default" policy-pap | [2025-06-13T07:10:30.026+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-pap | [2025-06-13T07:10:30.127+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 83 ms. Found 7 JPA repository interfaces. 
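Editor's note: from this point the section is the policy-pap startup log; the ConsumerConfig dumps that follow show PAP subscribing to the policy-pdp-pap topic on kafka:9092 over PLAINTEXT, the same topic the OPA PDP messages above were exchanged on. When debugging a CSIT run it can help to tail that topic independently; a minimal sketch using the third-party kafka-python package (an assumption, not something installed by this job) would be:

```python
from kafka import KafkaConsumer  # pip install kafka-python (assumed, not part of this build)

# Watch the PAP<->PDP topic seen in the ConsumerConfig dumps below.
consumer = KafkaConsumer(
    "policy-pdp-pap",
    bootstrap_servers="kafka:9092",   # PLAINTEXT, as in the logged config
    group_id="csit-log-tail",         # any otherwise unused group id
    auto_offset_reset="latest",
    value_deserializer=lambda raw: raw.decode("utf-8"),
)

for record in consumer:
    # Each value is a JSON message such as the PDP_STATUS and PDP_UPDATE payloads above.
    print(record.value)
```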
policy-pap | [2025-06-13T07:10:31.124+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) policy-pap | [2025-06-13T07:10:31.138+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-13T07:10:31.140+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2025-06-13T07:10:31.140+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] policy-pap | [2025-06-13T07:10:31.198+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2025-06-13T07:10:31.199+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2490 ms policy-pap | [2025-06-13T07:10:31.614+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2025-06-13T07:10:31.690+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final policy-pap | [2025-06-13T07:10:31.737+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-pap | [2025-06-13T07:10:32.140+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-pap | [2025-06-13T07:10:32.193+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2025-06-13T07:10:32.406+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@4769378c policy-pap | [2025-06-13T07:10:32.408+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-pap | [2025-06-13T07:10:32.502+00:00|INFO|pooling|main] HHH10001005: Database info: policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] policy-pap | Database driver: undefined/unknown policy-pap | Database version: 16.4 policy-pap | Autocommit mode: undefined/unknown policy-pap | Isolation level: undefined/unknown policy-pap | Minimum pool size: undefined/unknown policy-pap | Maximum pool size: undefined/unknown policy-pap | [2025-06-13T07:10:34.538+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-pap | [2025-06-13T07:10:34.543+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2025-06-13T07:10:35.765+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = fddc67c9-f8de-4e3f-9662-92c761a4150d policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | 
internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | 
ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-13T07:10:35.820+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T07:10:35.968+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T07:10:35.968+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T07:10:35.968+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749798635967 policy-pap | [2025-06-13T07:10:35.971+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-1, groupId=fddc67c9-f8de-4e3f-9662-92c761a4150d] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-13T07:10:35.971+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 
policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-13T07:10:35.972+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T07:10:35.980+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T07:10:35.980+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T07:10:35.980+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749798635980 policy-pap | [2025-06-13T07:10:35.980+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-13T07:10:36.324+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=opaGroup, description=null, pdpGroupState=ACTIVE, properties={}, pdpSubgroups=[PdpSubGroup(pdpType=opa, supportedPolicyTypes=[onap.policies.native.opa 1.0.0], policies=[slice.capacity.check 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties={}, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2025-06-13T07:10:36.469+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2025-06-13T07:10:36.563+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager policy-pap | [2025-06-13T07:10:36.791+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. policy-pap | [2025-06-13T07:10:37.538+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' policy-pap | [2025-06-13T07:10:37.647+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-13T07:10:37.663+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' policy-pap | [2025-06-13T07:10:37.687+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2025-06-13T07:10:37.688+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2025-06-13T07:10:37.688+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2025-06-13T07:10:37.689+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2025-06-13T07:10:37.689+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2025-06-13T07:10:37.690+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2025-06-13T07:10:37.690+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2025-06-13T07:10:37.701+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=fddc67c9-f8de-4e3f-9662-92c761a4150d, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4770e50a policy-pap | [2025-06-13T07:10:37.714+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=fddc67c9-f8de-4e3f-9662-92c761a4150d, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-13T07:10:37.715+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true 
policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = fddc67c9-f8de-4e3f-9662-92c761a4150d policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 
policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-13T07:10:37.715+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T07:10:37.723+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T07:10:37.723+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T07:10:37.723+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749798637723 policy-pap | [2025-06-13T07:10:37.723+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3, groupId=fddc67c9-f8de-4e3f-9662-92c761a4150d] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-13T07:10:37.724+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2025-06-13T07:10:37.724+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=cc686425-afa8-46ba-838f-e4255adc7b15, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@3d1f6213 policy-pap | [2025-06-13T07:10:37.724+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=cc686425-afa8-46ba-838f-e4255adc7b15, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-13T07:10:37.725+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | 
client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | 
ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-13T07:10:37.725+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T07:10:37.730+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T07:10:37.730+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T07:10:37.730+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749798637730 policy-pap | [2025-06-13T07:10:37.731+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-13T07:10:37.731+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2025-06-13T07:10:37.731+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=cc686425-afa8-46ba-838f-e4255adc7b15, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-13T07:10:37.731+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=fddc67c9-f8de-4e3f-9662-92c761a4150d, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-13T07:10:37.731+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=19d8ecd4-fa2d-45e7-89f4-283aaa2fb6e6, alive=false, publisher=null]]: starting policy-pap | [2025-06-13T07:10:37.742+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] 
policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm 
= SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-13T07:10:37.743+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T07:10:37.755+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-pap | [2025-06-13T07:10:37.774+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T07:10:37.774+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T07:10:37.774+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749798637774 policy-pap | [2025-06-13T07:10:37.775+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=19d8ecd4-fa2d-45e7-89f4-283aaa2fb6e6, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-13T07:10:37.775+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9412f993-f750-4d50-9438-37e7c44f21e6, alive=false, publisher=null]]: starting policy-pap | [2025-06-13T07:10:37.776+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | 
retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-13T07:10:37.776+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-13T07:10:37.777+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
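The two ProducerConfig dumps above show the sinks PAP creates for the policy-pdp-pap and policy-notification topics: a PLAINTEXT connection to kafka:9092, StringSerializer for keys and values, and acks = -1 with idempotence enabled. A minimal sketch of an equivalent producer for poking at the same broker by hand, assuming the kafka-python client is installed and the broker is reachable under the same kafka:9092 address (a debugging aid, not part of the CSIT run):

```python
# Hypothetical debugging helper: mirrors the logged ProducerConfig
# (acks=all, string key/value serialization, PLAINTEXT) so a test
# message can be pushed onto policy-pdp-pap manually.
from kafka import KafkaProducer  # assumes kafka-python is installed

producer = KafkaProducer(
    bootstrap_servers="kafka:9092",        # same bootstrap.servers as in the log
    acks="all",                            # acks = -1 in the ProducerConfig dump
    key_serializer=lambda k: k.encode(),   # StringSerializer equivalent
    value_serializer=lambda v: v.encode(),
)

producer.send("policy-pdp-pap", key="test", value='{"messageName":"PDP_STATUS"}')
producer.flush()
```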
policy-pap | [2025-06-13T07:10:37.781+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-13T07:10:37.781+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-13T07:10:37.781+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749798637781 policy-pap | [2025-06-13T07:10:37.782+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9412f993-f750-4d50-9438-37e7c44f21e6, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-13T07:10:37.782+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2025-06-13T07:10:37.782+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2025-06-13T07:10:37.785+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2025-06-13T07:10:37.785+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2025-06-13T07:10:37.786+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2025-06-13T07:10:37.786+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2025-06-13T07:10:37.789+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2025-06-13T07:10:37.790+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2025-06-13T07:10:37.791+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2025-06-13T07:10:37.792+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-pap | [2025-06-13T07:10:37.792+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2025-06-13T07:10:37.792+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.929 seconds (process running for 10.481) policy-pap | [2025-06-13T07:10:38.239+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: NTRvZRYCTeeyE7gtQqKPJg policy-pap | [2025-06-13T07:10:38.248+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-13T07:10:38.248+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: NTRvZRYCTeeyE7gtQqKPJg policy-pap | [2025-06-13T07:10:38.249+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: NTRvZRYCTeeyE7gtQqKPJg policy-pap | [2025-06-13T07:10:38.279+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 policy-pap | [2025-06-13T07:10:38.279+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 policy-pap | [2025-06-13T07:10:38.292+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3, groupId=fddc67c9-f8de-4e3f-9662-92c761a4150d] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T07:10:38.293+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3, groupId=fddc67c9-f8de-4e3f-9662-92c761a4150d] Cluster ID: NTRvZRYCTeeyE7gtQqKPJg policy-pap | [2025-06-13T07:10:38.396+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T07:10:38.411+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3, groupId=fddc67c9-f8de-4e3f-9662-92c761a4150d] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T07:10:38.591+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T07:10:38.660+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3, groupId=fddc67c9-f8de-4e3f-9662-92c761a4150d] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T07:10:38.964+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T07:10:39.097+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3, groupId=fddc67c9-f8de-4e3f-9662-92c761a4150d] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T07:10:39.720+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-13T07:10:39.726+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-13T07:10:39.757+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-6d7dd9e0-ceec-417b-96be-0e880db3bf73 policy-pap | [2025-06-13T07:10:39.757+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-13T07:10:40.019+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3, groupId=fddc67c9-f8de-4e3f-9662-92c761a4150d] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-13T07:10:40.023+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3, groupId=fddc67c9-f8de-4e3f-9662-92c761a4150d] (Re-)joining group policy-pap | [2025-06-13T07:10:40.028+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3, groupId=fddc67c9-f8de-4e3f-9662-92c761a4150d] Request joining group due to: need to re-join with the given member-id: consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3-b7340292-f5cb-4306-86b3-aa9dca814359 policy-pap | [2025-06-13T07:10:40.028+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3, groupId=fddc67c9-f8de-4e3f-9662-92c761a4150d] (Re-)joining group policy-pap | [2025-06-13T07:10:41.610+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-1] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2025-06-13T07:10:41.610+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Initializing Servlet 'dispatcherServlet' policy-pap | [2025-06-13T07:10:41.613+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Completed initialization in 3 ms policy-pap | [2025-06-13T07:10:42.781+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-6d7dd9e0-ceec-417b-96be-0e880db3bf73', protocol='range'} policy-pap | [2025-06-13T07:10:42.789+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-6d7dd9e0-ceec-417b-96be-0e880db3bf73=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-13T07:10:42.842+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-6d7dd9e0-ceec-417b-96be-0e880db3bf73', protocol='range'} policy-pap | [2025-06-13T07:10:42.843+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-13T07:10:42.846+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-13T07:10:42.864+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-13T07:10:42.879+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
policy-pap | [2025-06-13T07:10:43.033+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3, groupId=fddc67c9-f8de-4e3f-9662-92c761a4150d] Successfully joined group with generation Generation{generationId=1, memberId='consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3-b7340292-f5cb-4306-86b3-aa9dca814359', protocol='range'} policy-pap | [2025-06-13T07:10:43.034+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3, groupId=fddc67c9-f8de-4e3f-9662-92c761a4150d] Finished assignment for group at generation 1: {consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3-b7340292-f5cb-4306-86b3-aa9dca814359=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-13T07:10:43.040+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3, groupId=fddc67c9-f8de-4e3f-9662-92c761a4150d] Successfully synced group in generation Generation{generationId=1, memberId='consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3-b7340292-f5cb-4306-86b3-aa9dca814359', protocol='range'} policy-pap | [2025-06-13T07:10:43.041+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3, groupId=fddc67c9-f8de-4e3f-9662-92c761a4150d] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-13T07:10:43.041+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3, groupId=fddc67c9-f8de-4e3f-9662-92c761a4150d] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-13T07:10:43.044+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3, groupId=fddc67c9-f8de-4e3f-9662-92c761a4150d] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-13T07:10:43.048+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddc67c9-f8de-4e3f-9662-92c761a4150d-3, groupId=fddc67c9-f8de-4e3f-9662-92c761a4150d] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
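At this point both PAP consumer groups (policy-pap for heartbeats and the UUID-named group for PDP-PAP traffic) have joined and been assigned partition policy-pdp-pap-0 from offset 0, so all of the PDP_STATUS / PDP_UPDATE / PDP_STATE_CHANGE traffic that follows flows over that single partition. A minimal sketch for tailing the same topic while the compose stack is up, assuming kafka-python and the compose network's kafka:9092 listener (not part of the CSIT run):

```python
# Hypothetical debugging helper: tail policy-pdp-pap from the beginning to see
# the same PDP_STATUS / PDP_UPDATE / PDP_STATE_CHANGE JSON that PAP logs below.
from kafka import KafkaConsumer  # assumes kafka-python is installed

consumer = KafkaConsumer(
    "policy-pdp-pap",
    bootstrap_servers="kafka:9092",
    group_id="log-tail",            # any group id not already used by PAP or the PDP
    auto_offset_reset="earliest",   # start from offset 0, as PAP did above
    value_deserializer=lambda v: v.decode("utf-8"),
)

for record in consumer:
    print(record.offset, record.value)
```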
policy-pap | [2025-06-13T07:12:33.028+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: policy-pap | [] policy-pap | [2025-06-13T07:12:33.030+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"b48beb6f-bbaf-468a-8cc4-2505213f4cd6","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1749798752981","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:12:33.030+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"b48beb6f-bbaf-468a-8cc4-2505213f4cd6","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1749798752981","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:12:33.036+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-13T07:12:33.572+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting policy-pap | [2025-06-13T07:12:33.572+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting listener policy-pap | [2025-06-13T07:12:33.573+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting timer policy-pap | [2025-06-13T07:12:33.573+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=65c680c2-7580-4342-a71d-4215fb6c0c76, expireMs=1749798783573] policy-pap | [2025-06-13T07:12:33.575+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting enqueue policy-pap | [2025-06-13T07:12:33.575+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=65c680c2-7580-4342-a71d-4215fb6c0c76, expireMs=1749798783573] policy-pap | [2025-06-13T07:12:33.575+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate started policy-pap | [2025-06-13T07:12:33.581+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"65c680c2-7580-4342-a71d-4215fb6c0c76","timestampMs":1749798753550,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:12:33.619+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"65c680c2-7580-4342-a71d-4215fb6c0c76","timestampMs":1749798753550,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 
policy-pap | [2025-06-13T07:12:33.619+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T07:12:33.622+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"65c680c2-7580-4342-a71d-4215fb6c0c76","timestampMs":1749798753550,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:12:33.622+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T07:12:33.653+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"65c680c2-7580-4342-a71d-4215fb6c0c76","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"1363f3b3-f7ea-441b-8a2d-db94a2cbf351","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798753642","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:12:33.654+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 65c680c2-7580-4342-a71d-4215fb6c0c76 policy-pap | [2025-06-13T07:12:33.655+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"65c680c2-7580-4342-a71d-4215fb6c0c76","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": 
\"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"1363f3b3-f7ea-441b-8a2d-db94a2cbf351","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798753642","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:12:33.656+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping policy-pap | [2025-06-13T07:12:33.656+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping enqueue policy-pap | [2025-06-13T07:12:33.657+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping timer policy-pap | [2025-06-13T07:12:33.657+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=65c680c2-7580-4342-a71d-4215fb6c0c76, expireMs=1749798783573] policy-pap | [2025-06-13T07:12:33.657+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping listener policy-pap | [2025-06-13T07:12:33.657+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopped policy-pap | [2025-06-13T07:12:33.669+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate successful policy-pap | [2025-06-13T07:12:33.669+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad start publishing next request policy-pap | [2025-06-13T07:12:33.669+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpStateChange starting policy-pap | [2025-06-13T07:12:33.669+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpStateChange starting listener policy-pap | [2025-06-13T07:12:33.669+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpStateChange starting timer policy-pap | [2025-06-13T07:12:33.669+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=8c2c5889-2f1c-4ea8-98ba-5f77f39df613, expireMs=1749798783669] policy-pap | [2025-06-13T07:12:33.669+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpStateChange starting enqueue policy-pap | [2025-06-13T07:12:33.669+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpStateChange started policy-pap | [2025-06-13T07:12:33.669+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=8c2c5889-2f1c-4ea8-98ba-5f77f39df613, expireMs=1749798783669] policy-pap | [2025-06-13T07:12:33.669+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8c2c5889-2f1c-4ea8-98ba-5f77f39df613","timestampMs":1749798753550,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:12:33.671+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"slice.capacity.check","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | 
[2025-06-13T07:12:33.686+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8c2c5889-2f1c-4ea8-98ba-5f77f39df613","timestampMs":1749798753550,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:12:33.686+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-13T07:12:33.692+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"8c2c5889-2f1c-4ea8-98ba-5f77f39df613","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"9b286f27-e478-4da7-8ec7-024b02d048c7","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798753682","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:12:33.693+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 8c2c5889-2f1c-4ea8-98ba-5f77f39df613 policy-pap | [2025-06-13T07:12:33.696+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-notification=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-13T07:12:33.992+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8c2c5889-2f1c-4ea8-98ba-5f77f39df613","timestampMs":1749798753550,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:12:33.992+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-13T07:12:33.996+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"8c2c5889-2f1c-4ea8-98ba-5f77f39df613","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"9b286f27-e478-4da7-8ec7-024b02d048c7","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798753682","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:12:33.997+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpStateChange stopping policy-pap | [2025-06-13T07:12:33.997+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpStateChange stopping enqueue policy-pap | [2025-06-13T07:12:33.997+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpStateChange stopping timer policy-pap | [2025-06-13T07:12:33.998+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=8c2c5889-2f1c-4ea8-98ba-5f77f39df613, expireMs=1749798783669] policy-pap | 
[2025-06-13T07:12:33.998+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpStateChange stopping listener policy-pap | [2025-06-13T07:12:33.998+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpStateChange stopped policy-pap | [2025-06-13T07:12:33.998+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpStateChange successful policy-pap | [2025-06-13T07:12:33.998+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad start publishing next request policy-pap | [2025-06-13T07:12:33.998+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting policy-pap | [2025-06-13T07:12:33.998+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting listener policy-pap | [2025-06-13T07:12:33.998+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting timer policy-pap | [2025-06-13T07:12:33.998+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=ac3d88cb-5c57-4701-b0f7-3d8d8784f812, expireMs=1749798783998] policy-pap | [2025-06-13T07:12:33.998+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting enqueue policy-pap | [2025-06-13T07:12:33.998+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate started policy-pap | [2025-06-13T07:12:33.998+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ac3d88cb-5c57-4701-b0f7-3d8d8784f812","timestampMs":1749798753983,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:12:34.005+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ac3d88cb-5c57-4701-b0f7-3d8d8784f812","timestampMs":1749798753983,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:12:34.005+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T07:12:34.006+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ac3d88cb-5c57-4701-b0f7-3d8d8784f812","timestampMs":1749798753983,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:12:34.006+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T07:12:34.012+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp 
Update","response":{"responseTo":"ac3d88cb-5c57-4701-b0f7-3d8d8784f812","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"bb345fb7-a0ec-4fa1-8011-106aa18c38b5","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798754002","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:12:34.012+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id ac3d88cb-5c57-4701-b0f7-3d8d8784f812 policy-pap | [2025-06-13T07:12:34.014+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ac3d88cb-5c57-4701-b0f7-3d8d8784f812","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"bb345fb7-a0ec-4fa1-8011-106aa18c38b5","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798754002","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:12:34.015+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping policy-pap | [2025-06-13T07:12:34.015+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping enqueue policy-pap | [2025-06-13T07:12:34.015+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping timer policy-pap | [2025-06-13T07:12:34.015+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=ac3d88cb-5c57-4701-b0f7-3d8d8784f812, expireMs=1749798783998] policy-pap | [2025-06-13T07:12:34.016+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping listener policy-pap | [2025-06-13T07:12:34.016+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopped policy-pap | [2025-06-13T07:12:34.023+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate successful policy-pap | [2025-06-13T07:12:34.023+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad has no more requests policy-pap | [2025-06-13T07:12:37.792+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms policy-pap | [2025-06-13T07:13:03.574+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=65c680c2-7580-4342-a71d-4215fb6c0c76, expireMs=1749798783573] policy-pap | [2025-06-13T07:13:03.669+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=8c2c5889-2f1c-4ea8-98ba-5f77f39df613, expireMs=1749798783669] policy-pap | [2025-06-13T07:13:32.993+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"692c21fa-4a7d-4b0c-933f-0de051ddcde9","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798812978","deploymentInstanceInfo":""} policy-pap | 
[2025-06-13T07:13:32.993+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"692c21fa-4a7d-4b0c-933f-0de051ddcde9","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798812978","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:13:32.999+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-13T07:13:51.905+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group opaGroup policy-pap | [2025-06-13T07:13:51.906+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-9] add policy zoneB 1.0.6 to subgroup opaGroup opa count=2 policy-pap | [2025-06-13T07:13:51.906+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy zoneB 1.0.6 policy-pap | [2025-06-13T07:13:51.907+00:00|INFO|SessionData|http-nio-6969-exec-9] add update opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad opaGroup opa policies=1 policy-pap | [2025-06-13T07:13:51.908+00:00|INFO|SessionData|http-nio-6969-exec-9] update cached group opaGroup policy-pap | [2025-06-13T07:13:51.909+00:00|INFO|SessionData|http-nio-6969-exec-9] updating DB group opaGroup policy-pap | [2025-06-13T07:13:51.928+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=DEPLOYMENT, timestamp=2025-06-13T07:13:51Z, user=policyadmin)] policy-pap | [2025-06-13T07:13:51.958+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting policy-pap | [2025-06-13T07:13:51.958+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting listener policy-pap | [2025-06-13T07:13:51.958+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting timer policy-pap | [2025-06-13T07:13:51.958+00:00|INFO|TimerManager|http-nio-6969-exec-9] update timer registered Timer [name=8dd374a9-79bd-438c-9600-8a6cda77dc82, expireMs=1749798861958] policy-pap | [2025-06-13T07:13:51.958+00:00|INFO|TimerManager|Thread-9] update timer waiting 30000ms Timer [name=8dd374a9-79bd-438c-9600-8a6cda77dc82, expireMs=1749798861958] policy-pap | [2025-06-13T07:13:51.958+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting enqueue policy-pap | [2025-06-13T07:13:51.958+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate started policy-pap | [2025-06-13T07:13:51.959+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8dd374a9-79bd-438c-9600-8a6cda77dc82","timestampMs":1749798831907,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:13:51.969+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | 
{"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8dd374a9-79bd-438c-9600-8a6cda77dc82","timestampMs":1749798831907,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:13:51.969+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T07:13:51.974+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8dd374a9-79bd-438c-9600-8a6cda77dc82","timestampMs":1749798831907,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:13:51.974+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T07:13:52.002+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8dd374a9-79bd-438c-9600-8a6cda77dc82","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"2af62cb2-4297-4abf-a27d-0aa261166fc1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798831990","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:13:52.003+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 8dd374a9-79bd-438c-9600-8a6cda77dc82 policy-pap | [2025-06-13T07:13:52.010+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8dd374a9-79bd-438c-9600-8a6cda77dc82","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"2af62cb2-4297-4abf-a27d-0aa261166fc1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798831990","deploymentInstanceInfo":""} policy-pap | 
[2025-06-13T07:13:52.010+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping policy-pap | [2025-06-13T07:13:52.011+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping enqueue policy-pap | [2025-06-13T07:13:52.011+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping timer policy-pap | [2025-06-13T07:13:52.011+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=8dd374a9-79bd-438c-9600-8a6cda77dc82, expireMs=1749798861958] policy-pap | [2025-06-13T07:13:52.011+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping listener policy-pap | [2025-06-13T07:13:52.011+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopped policy-pap | [2025-06-13T07:13:52.020+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate successful policy-pap | [2025-06-13T07:13:52.020+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad has no more requests policy-pap | [2025-06-13T07:13:52.020+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-13T07:14:16.468+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group opaGroup policy-pap | [2025-06-13T07:14:16.469+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-7] remove policy zoneB 1.0.6 from subgroup opaGroup opa count=1 policy-pap | [2025-06-13T07:14:16.469+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering an undeploy for policy zoneB 1.0.6 policy-pap | [2025-06-13T07:14:16.469+00:00|INFO|SessionData|http-nio-6969-exec-7] add update opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad opaGroup opa policies=0 policy-pap | [2025-06-13T07:14:16.469+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group opaGroup policy-pap | [2025-06-13T07:14:16.469+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group opaGroup policy-pap | [2025-06-13T07:14:16.481+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-13T07:14:16Z, user=policyadmin)] policy-pap | [2025-06-13T07:14:16.497+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting policy-pap | [2025-06-13T07:14:16.497+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting listener policy-pap | [2025-06-13T07:14:16.497+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting timer policy-pap | [2025-06-13T07:14:16.497+00:00|INFO|TimerManager|http-nio-6969-exec-7] update timer registered Timer [name=392c61ad-08d0-4468-893f-238210710be1, expireMs=1749798886497] policy-pap | [2025-06-13T07:14:16.497+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting enqueue policy-pap | [2025-06-13T07:14:16.497+00:00|INFO|ServiceManager|http-nio-6969-exec-7] 
opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate started policy-pap | [2025-06-13T07:14:16.498+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"392c61ad-08d0-4468-893f-238210710be1","timestampMs":1749798856469,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:14:16.522+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"392c61ad-08d0-4468-893f-238210710be1","timestampMs":1749798856469,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:14:16.523+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T07:14:16.530+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"392c61ad-08d0-4468-893f-238210710be1","timestampMs":1749798856469,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:14:16.530+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T07:14:16.532+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"392c61ad-08d0-4468-893f-238210710be1","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"cb9c9d82-0b9f-4710-b67c-58e6f0db2328","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798856521","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:14:16.533+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"392c61ad-08d0-4468-893f-238210710be1","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"cb9c9d82-0b9f-4710-b67c-58e6f0db2328","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798856521","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:14:16.533+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping policy-pap | [2025-06-13T07:14:16.533+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping enqueue policy-pap | [2025-06-13T07:14:16.533+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping timer policy-pap | [2025-06-13T07:14:16.533+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 392c61ad-08d0-4468-893f-238210710be1 policy-pap | [2025-06-13T07:14:16.533+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=392c61ad-08d0-4468-893f-238210710be1, expireMs=1749798886497] policy-pap | [2025-06-13T07:14:16.533+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping listener policy-pap | [2025-06-13T07:14:16.533+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopped policy-pap | [2025-06-13T07:14:16.564+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate successful policy-pap | [2025-06-13T07:14:16.564+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad has no more requests policy-pap | [2025-06-13T07:14:16.564+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]} policy-pap | [2025-06-13T07:14:16.928+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group opaGroup policy-pap | [2025-06-13T07:14:16.931+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-10] failed to undeploy policy: zoneB null policy-pap | [2025-06-13T07:14:16.931+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-10] undeploy policy failed policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: zoneB null policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) policy-pap | at 
org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) policy-pap | at 
org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at 
org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) policy-pap | [2025-06-13T07:14:17.654+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group opaGroup policy-pap | 
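Note: the WARN and PfModelException at 07:14:16.931 are the expected outcome of the negative test step: zoneB 1.0.6 was already undeployed at 07:14:16.497, so the second undeploy request finds the policy in no PDP group and PAP reports "policy does not appear in any PDP group". The actual deploy/undeploy outcome is published on the policy-notification topic (see the deployed-policies / undeployed-policies message above). A minimal sketch, assuming a captured notification JSON, of how those counters could be checked; names are assumptions, not part of the suite.
# check_notification.py -- minimal sketch (assumed names); verifies the
# success/failure counters in one captured policy-notification message.
import json

with open("policy_notification.json") as fh:   # assumed capture of one notification
    note = json.load(fh)

for key in ("deployed-policies", "undeployed-policies"):
    for policy in note.get(key, []):
        assert policy["failure-count"] == 0, f"{policy['policy-id']} reported failures"
        assert policy["success-count"] >= 1, f"{policy['policy-id']} not applied anywhere"
        print(f"{key}: {policy['policy-id']} {policy['policy-version']} OK")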
[2025-06-13T07:14:17.654+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-2] add policy vehicle 1.0.6 to subgroup opaGroup opa count=2 policy-pap | [2025-06-13T07:14:17.654+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] Registering a deploy for policy vehicle 1.0.6 policy-pap | [2025-06-13T07:14:17.655+00:00|INFO|SessionData|http-nio-6969-exec-2] add update opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad opaGroup opa policies=1 policy-pap | [2025-06-13T07:14:17.655+00:00|INFO|SessionData|http-nio-6969-exec-2] update cached group opaGroup policy-pap | [2025-06-13T07:14:17.655+00:00|INFO|SessionData|http-nio-6969-exec-2] updating DB group opaGroup policy-pap | [2025-06-13T07:14:17.662+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=DEPLOYMENT, timestamp=2025-06-13T07:14:17Z, user=policyadmin)] policy-pap | [2025-06-13T07:14:17.670+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting policy-pap | [2025-06-13T07:14:17.670+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting listener policy-pap | [2025-06-13T07:14:17.670+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting timer policy-pap | [2025-06-13T07:14:17.670+00:00|INFO|TimerManager|http-nio-6969-exec-2] update timer registered Timer [name=77c619d9-18d3-4e40-8dcc-5fadec588ba9, expireMs=1749798887670] policy-pap | [2025-06-13T07:14:17.670+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting enqueue policy-pap | [2025-06-13T07:14:17.670+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate started policy-pap | [2025-06-13T07:14:17.670+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"77c619d9-18d3-4e40-8dcc-5fadec588ba9","timestampMs":1749798857655,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:14:17.678+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | 
{"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"77c619d9-18d3-4e40-8dcc-5fadec588ba9","timestampMs":1749798857655,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:14:17.679+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T07:14:17.679+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"77c619d9-18d3-4e40-8dcc-5fadec588ba9","timestampMs":1749798857655,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:14:17.680+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T07:14:17.709+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"77c619d9-18d3-4e40-8dcc-5fadec588ba9","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": 
\"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"7a1b17c2-cc1f-445b-8d13-8f973ec4aee0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798857700","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:14:17.710+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 77c619d9-18d3-4e40-8dcc-5fadec588ba9 policy-pap | [2025-06-13T07:14:17.711+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"77c619d9-18d3-4e40-8dcc-5fadec588ba9","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"7a1b17c2-cc1f-445b-8d13-8f973ec4aee0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798857700","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:14:17.712+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping policy-pap | [2025-06-13T07:14:17.712+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping enqueue policy-pap | [2025-06-13T07:14:17.712+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping timer policy-pap | [2025-06-13T07:14:17.712+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=77c619d9-18d3-4e40-8dcc-5fadec588ba9, expireMs=1749798887670] policy-pap | [2025-06-13T07:14:17.712+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping listener policy-pap | [2025-06-13T07:14:17.712+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopped policy-pap | [2025-06-13T07:14:17.720+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate successful policy-pap | [2025-06-13T07:14:17.720+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad has no more requests policy-pap | [2025-06-13T07:14:17.720+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-13T07:14:21.959+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=8dd374a9-79bd-438c-9600-8a6cda77dc82, expireMs=1749798861958] policy-pap | [2025-06-13T07:14:33.657+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp 
heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"8253ca4b-6c58-4c20-8302-22c6df8476c4","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798873643","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:14:33.657+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"8253ca4b-6c58-4c20-8302-22c6df8476c4","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798873643","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:14:33.659+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-13T07:14:37.804+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms policy-pap | [2025-06-13T07:14:42.080+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group opaGroup policy-pap | [2025-06-13T07:14:42.080+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-3] remove policy vehicle 1.0.6 from subgroup opaGroup opa count=1 policy-pap | [2025-06-13T07:14:42.080+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering an undeploy for policy vehicle 1.0.6 policy-pap | [2025-06-13T07:14:42.080+00:00|INFO|SessionData|http-nio-6969-exec-3] add update opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad opaGroup opa policies=0 policy-pap | [2025-06-13T07:14:42.080+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group opaGroup policy-pap | [2025-06-13T07:14:42.080+00:00|INFO|SessionData|http-nio-6969-exec-3] updating DB group opaGroup policy-pap | [2025-06-13T07:14:42.087+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-13T07:14:42Z, user=policyadmin)] policy-pap | [2025-06-13T07:14:42.098+00:00|INFO|ServiceManager|http-nio-6969-exec-3] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting policy-pap | [2025-06-13T07:14:42.098+00:00|INFO|ServiceManager|http-nio-6969-exec-3] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting listener policy-pap | [2025-06-13T07:14:42.098+00:00|INFO|ServiceManager|http-nio-6969-exec-3] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting timer policy-pap | [2025-06-13T07:14:42.098+00:00|INFO|TimerManager|http-nio-6969-exec-3] update timer registered Timer [name=39631ade-9478-410a-bff4-af88c805ba72, expireMs=1749798912098] policy-pap | [2025-06-13T07:14:42.098+00:00|INFO|ServiceManager|http-nio-6969-exec-3] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting enqueue policy-pap | [2025-06-13T07:14:42.099+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=39631ade-9478-410a-bff4-af88c805ba72, expireMs=1749798912098] policy-pap | [2025-06-13T07:14:42.099+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"39631ade-9478-410a-bff4-af88c805ba72","timestampMs":1749798882080,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:14:42.099+00:00|INFO|ServiceManager|http-nio-6969-exec-3] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate started policy-pap | [2025-06-13T07:14:42.107+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"39631ade-9478-410a-bff4-af88c805ba72","timestampMs":1749798882080,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:14:42.108+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T07:14:42.112+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"39631ade-9478-410a-bff4-af88c805ba72","timestampMs":1749798882080,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:14:42.112+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T07:14:42.121+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"39631ade-9478-410a-bff4-af88c805ba72","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"322086b3-771a-48e8-890a-3750df1a94e0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798882111","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:14:42.122+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"39631ade-9478-410a-bff4-af88c805ba72","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"322086b3-771a-48e8-890a-3750df1a94e0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798882111","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:14:42.122+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 39631ade-9478-410a-bff4-af88c805ba72 policy-pap | [2025-06-13T07:14:42.122+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping policy-pap | [2025-06-13T07:14:42.122+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping enqueue policy-pap | [2025-06-13T07:14:42.122+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping timer policy-pap | [2025-06-13T07:14:42.122+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=39631ade-9478-410a-bff4-af88c805ba72, expireMs=1749798912098] policy-pap | [2025-06-13T07:14:42.123+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping listener policy-pap | [2025-06-13T07:14:42.123+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopped policy-pap | [2025-06-13T07:14:42.130+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate successful policy-pap | [2025-06-13T07:14:42.131+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad has no more requests policy-pap | [2025-06-13T07:14:42.131+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]} policy-pap | [2025-06-13T07:14:42.476+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group opaGroup policy-pap | [2025-06-13T07:14:42.476+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-6] failed to undeploy policy: vehicle null policy-pap | [2025-06-13T07:14:42.476+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-6] undeploy policy failed policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: vehicle null policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) policy-pap | at 
org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) policy-pap | at 
org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at 
org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) policy-pap | [2025-06-13T07:14:43.191+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group opaGroup policy-pap | 
[2025-06-13T07:14:43.191+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-5] add policy abac 1.0.7 to subgroup opaGroup opa count=2 policy-pap | [2025-06-13T07:14:43.191+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering a deploy for policy abac 1.0.7 policy-pap | [2025-06-13T07:14:43.192+00:00|INFO|SessionData|http-nio-6969-exec-5] add update opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad opaGroup opa policies=1 policy-pap | [2025-06-13T07:14:43.192+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group opaGroup policy-pap | [2025-06-13T07:14:43.192+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group opaGroup policy-pap | [2025-06-13T07:14:43.197+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=DEPLOYMENT, timestamp=2025-06-13T07:14:43Z, user=policyadmin)] policy-pap | [2025-06-13T07:14:43.204+00:00|INFO|ServiceManager|http-nio-6969-exec-5] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting policy-pap | [2025-06-13T07:14:43.204+00:00|INFO|ServiceManager|http-nio-6969-exec-5] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting listener policy-pap | [2025-06-13T07:14:43.204+00:00|INFO|ServiceManager|http-nio-6969-exec-5] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting timer policy-pap | [2025-06-13T07:14:43.204+00:00|INFO|TimerManager|http-nio-6969-exec-5] update timer registered Timer [name=11344b48-77c7-4139-bbbe-69b82470a72d, expireMs=1749798913204] policy-pap | [2025-06-13T07:14:43.204+00:00|INFO|ServiceManager|http-nio-6969-exec-5] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting enqueue policy-pap | [2025-06-13T07:14:43.205+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"11344b48-77c7-4139-bbbe-69b82470a72d","timestampMs":1749798883192,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:14:43.205+00:00|INFO|ServiceManager|http-nio-6969-exec-5] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate started policy-pap | [2025-06-13T07:14:43.212+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKI
CAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"11344b48-77c7-4139-bbbe-69b82470a72d","timestampMs":1749798883192,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:14:43.212+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T07:14:43.212+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"11344b48-77c7-4139-bbbe-69b82470a72d","timestampMs":1749798883192,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:14:43.212+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T07:14:43.246+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"11344b48-77c7-4139-bbbe-69b82470a72d","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"3738cb0b-7fa0-4b50-af21-12b1d64dad09","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798883235","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:14:43.246+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 11344b48-77c7-4139-bbbe-69b82470a72d policy-pap | [2025-06-13T07:14:43.247+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"11344b48-77c7-4139-bbbe-69b82470a72d","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"3738cb0b-7fa0-4b50-af21-12b1d64dad09","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798883235","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:14:43.248+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping policy-pap | [2025-06-13T07:14:43.248+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping enqueue policy-pap | [2025-06-13T07:14:43.248+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping timer policy-pap | 
[2025-06-13T07:14:43.248+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=11344b48-77c7-4139-bbbe-69b82470a72d, expireMs=1749798913204] policy-pap | [2025-06-13T07:14:43.248+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping listener policy-pap | [2025-06-13T07:14:43.248+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopped policy-pap | [2025-06-13T07:14:43.256+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate successful policy-pap | [2025-06-13T07:14:43.256+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad has no more requests policy-pap | [2025-06-13T07:14:43.257+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-13T07:15:07.854+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group opaGroup policy-pap | [2025-06-13T07:15:07.854+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-8] remove policy abac 1.0.7 from subgroup opaGroup opa count=1 policy-pap | [2025-06-13T07:15:07.854+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] Registering an undeploy for policy abac 1.0.7 policy-pap | [2025-06-13T07:15:07.854+00:00|INFO|SessionData|http-nio-6969-exec-8] add update opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad opaGroup opa policies=0 policy-pap | [2025-06-13T07:15:07.854+00:00|INFO|SessionData|http-nio-6969-exec-8] update cached group opaGroup policy-pap | [2025-06-13T07:15:07.854+00:00|INFO|SessionData|http-nio-6969-exec-8] updating DB group opaGroup policy-pap | [2025-06-13T07:15:07.861+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=UNDEPLOYMENT, timestamp=2025-06-13T07:15:07Z, user=policyadmin)] policy-pap | [2025-06-13T07:15:07.869+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting policy-pap | [2025-06-13T07:15:07.869+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting listener policy-pap | [2025-06-13T07:15:07.869+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting timer policy-pap | [2025-06-13T07:15:07.870+00:00|INFO|TimerManager|http-nio-6969-exec-8] update timer registered Timer [name=d404f6e7-c762-4013-8b18-b201c7b9bd2b, expireMs=1749798937870] policy-pap | [2025-06-13T07:15:07.870+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate starting enqueue policy-pap | [2025-06-13T07:15:07.870+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate started policy-pap | [2025-06-13T07:15:07.870+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"d404f6e7-c762-4013-8b18-b201c7b9bd2b","timestampMs":1749798907854,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:15:07.879+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"d404f6e7-c762-4013-8b18-b201c7b9bd2b","timestampMs":1749798907854,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:15:07.882+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T07:15:07.882+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-2b85c803-386f-4407-9a98-f1b8a35ec4fc","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"d404f6e7-c762-4013-8b18-b201c7b9bd2b","timestampMs":1749798907854,"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-13T07:15:07.883+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-13T07:15:07.887+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"d404f6e7-c762-4013-8b18-b201c7b9bd2b","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"97a69133-831c-493a-a449-15747fbb9288","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798907878","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:15:07.887+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id d404f6e7-c762-4013-8b18-b201c7b9bd2b policy-pap | [2025-06-13T07:15:07.888+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"d404f6e7-c762-4013-8b18-b201c7b9bd2b","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad","requestId":"97a69133-831c-493a-a449-15747fbb9288","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749798907878","deploymentInstanceInfo":""} policy-pap | [2025-06-13T07:15:07.888+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping policy-pap | [2025-06-13T07:15:07.888+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping enqueue policy-pap | [2025-06-13T07:15:07.888+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping timer policy-pap | [2025-06-13T07:15:07.888+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=d404f6e7-c762-4013-8b18-b201c7b9bd2b, expireMs=1749798937870] policy-pap | [2025-06-13T07:15:07.888+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopping listener policy-pap | [2025-06-13T07:15:07.888+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate stopped policy-pap | [2025-06-13T07:15:07.897+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad PdpUpdate successful policy-pap | [2025-06-13T07:15:07.897+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}]} policy-pap | [2025-06-13T07:15:07.897+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-3f7f0e1c-caa5-4081-b00c-dff348db53ad has no more requests policy-pap | [2025-06-13T07:15:08.199+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group opaGroup policy-pap | [2025-06-13T07:15:08.199+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-9] failed to undeploy policy: abac null policy-pap | [2025-06-13T07:15:08.200+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-9] undeploy policy failed policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: abac null policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() policy-pap | at 
org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) policy-pap | at 
jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 
policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) policy-pap | at 
org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at 
org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) policy-pap | [2025-06-13T07:15:12.098+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=39631ade-9478-410a-bff4-af88c805ba72, expireMs=1749798912098] postgres | The files belonging to this database system will 
be owned by user "postgres". postgres | This user must also own the server process. postgres | postgres | The database cluster will be initialized with locale "en_US.utf8". postgres | The default database encoding has accordingly been set to "UTF8". postgres | The default text search configuration will be set to "english". postgres | postgres | Data page checksums are disabled. postgres | postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok postgres | creating subdirectories ... ok postgres | selecting dynamic shared memory implementation ... posix postgres | selecting default max_connections ... 100 postgres | selecting default shared_buffers ... 128MB postgres | selecting default time zone ... Etc/UTC postgres | creating configuration files ... ok postgres | running bootstrap script ... ok postgres | performing post-bootstrap initialization ... ok postgres | initdb: warning: enabling "trust" authentication for local connections postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. postgres | syncing data to disk ... ok postgres | postgres | postgres | Success. You can now start the database server using: postgres | postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start postgres | postgres | waiting for server to start....2025-06-13 07:10:01.258 UTC [49] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-13 07:10:01.274 UTC [49] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-13 07:10:01.294 UTC [52] LOG: database system was shut down at 2025-06-13 07:10:00 UTC postgres | 2025-06-13 07:10:01.314 UTC [49] LOG: database system is ready to accept connections postgres | done postgres | server started postgres | postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf postgres | postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh postgres | #!/bin/bash -xv postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved postgres | # postgres | # Licensed under the Apache License, Version 2.0 (the "License"); postgres | # you may not use this file except in compliance with the License. postgres | # You may obtain a copy of the License at postgres | # postgres | # http://www.apache.org/licenses/LICENSE-2.0 postgres | # postgres | # Unless required by applicable law or agreed to in writing, software postgres | # distributed under the License is distributed on an "AS IS" BASIS, postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. postgres | # See the License for the specific language governing permissions and postgres | # limitations under the License. 
postgres | postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' postgres | CREATE ROLE postgres | postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | do postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" postgres | done postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' postgres | GRANT postgres | postgres | waiting for server to shut down....2025-06-13 07:10:02.512 UTC [49] LOG: 
received fast shutdown request postgres | 2025-06-13 07:10:02.517 UTC [49] LOG: aborting any active transactions postgres | 2025-06-13 07:10:02.518 UTC [49] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1 postgres | 2025-06-13 07:10:02.520 UTC [50] LOG: shutting down postgres | 2025-06-13 07:10:02.522 UTC [50] LOG: checkpoint starting: shutdown immediate postgres | 2025-06-13 07:10:03.143 UTC [50] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.521 s, sync=0.080 s, total=0.623 s; sync files=1788, longest=0.021 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 postgres | 2025-06-13 07:10:03.157 UTC [49] LOG: database system is shut down postgres | done postgres | server stopped postgres | postgres | PostgreSQL init process complete; ready for start up. postgres | postgres | 2025-06-13 07:10:03.241 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-13 07:10:03.241 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 postgres | 2025-06-13 07:10:03.242 UTC [1] LOG: listening on IPv6 address "::", port 5432 postgres | 2025-06-13 07:10:03.248 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-13 07:10:03.259 UTC [102] LOG: database system was shut down at 2025-06-13 07:10:03 UTC postgres | 2025-06-13 07:10:03.268 UTC [1] LOG: database system is ready to accept connections postgres | 2025-06-13 07:15:03.358 UTC [100] LOG: checkpoint starting: time postgres | 2025-06-13 07:16:08.294 UTC [100] LOG: checkpoint complete: wrote 650 buffers (4.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=64.899 s, sync=0.029 s, total=64.936 s; sync files=515, longest=0.002 s, average=0.001 s; distance=3534 kB, estimate=3534 kB; lsn=0/3150318, redo lsn=0/314DDE0 prometheus | time=2025-06-13T07:10:02.453Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d prometheus | time=2025-06-13T07:10:02.453Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" prometheus | time=2025-06-13T07:10:02.453Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" prometheus | time=2025-06-13T07:10:02.456Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs prometheus | time=2025-06-13T07:10:02.460Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 prometheus | time=2025-06-13T07:10:02.461Z level=INFO source=main.go:1266 msg="Starting TSDB ..." prometheus | time=2025-06-13T07:10:02.463Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 prometheus | time=2025-06-13T07:10:02.463Z level=INFO source=tls_config.go:350 msg="TLS is disabled." 
component=web http2=false address=[::]:9090 prometheus | time=2025-06-13T07:10:02.467Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb prometheus | time=2025-06-13T07:10:02.467Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=3.53µs prometheus | time=2025-06-13T07:10:02.467Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb prometheus | time=2025-06-13T07:10:02.467Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=305.656µs prometheus | time=2025-06-13T07:10:02.467Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=34.731µs wal_replay_duration=332.097µs wbl_replay_duration=330ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=3.53µs total_replay_duration=443.129µs prometheus | time=2025-06-13T07:10:02.471Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC prometheus | time=2025-06-13T07:10:02.471Z level=INFO source=main.go:1290 msg="TSDB started" prometheus | time=2025-06-13T07:10:02.471Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml prometheus | time=2025-06-13T07:10:02.474Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 prometheus | time=2025-06-13T07:10:02.474Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.27µs remote_storage=1.99µs web_handler=1.07µs query_engine=1.14µs scrape=219.804µs scrape_sd=196.894µs notify=278.656µs notify_sd=18.02µs rules=1.97µs tracing=4.051µs filename=/etc/prometheus/prometheus.yml totalDuration=2.654571ms prometheus | time=2025-06-13T07:10:02.474Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." prometheus | time=2025-06-13T07:10:02.474Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2025-06-13 07:10:07,618] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 07:10:07,621] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 07:10:07,621] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 07:10:07,621] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 07:10:07,621] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 07:10:07,622] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-13 07:10:07,622] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-13 07:10:07,622] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-13 07:10:07,623] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2025-06-13 07:10:07,624] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2025-06-13 07:10:07,624] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 07:10:07,624] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 07:10:07,624] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 07:10:07,624] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 07:10:07,624] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-13 07:10:07,624] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2025-06-13 07:10:07,635] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2025-06-13 07:10:07,637] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-13 07:10:07,637] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-13 07:10:07,639] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-13 07:10:07,649] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,649] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,649] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,649] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,649] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,650] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,650] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,650] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,650] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,650] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,651] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,651] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,651] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,651] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | 
[2025-06-13 07:10:07,651] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,651] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-reso
urce-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kaf
ka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,651] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,651] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,651] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,651] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,651] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,651] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,651] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,651] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,651] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,651] INFO Server environment:os.memory.free=494MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,651] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,651] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,651] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,652] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,652] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,652] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,652] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,652] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,652] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,652] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper | [2025-06-13 07:10:07,653] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,653] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,654] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-13 07:10:07,655] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-13 07:10:07,655] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-13 07:10:07,655] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-13 07:10:07,655] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-13 07:10:07,655] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-13 07:10:07,655] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-13 07:10:07,655] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-13 07:10:07,657] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,657] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,658] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-13 07:10:07,658] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-13 07:10:07,658] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,681] INFO Logging initialized @414ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper | [2025-06-13 07:10:07,748] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-13 07:10:07,748] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-13 07:10:07,775] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server) zookeeper | [2025-06-13 07:10:07,811] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper | [2025-06-13 07:10:07,811] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper | [2025-06-13 07:10:07,812] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) zookeeper | [2025-06-13 07:10:07,814] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper | [2025-06-13 07:10:07,823] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-13 07:10:07,835] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper | [2025-06-13 07:10:07,836] INFO Started @575ms (org.eclipse.jetty.server.Server) zookeeper | [2025-06-13 07:10:07,836] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper | [2025-06-13 07:10:07,839] INFO Using 
org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-13 07:10:07,840] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-13 07:10:07,842] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-13 07:10:07,843] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-13 07:10:07,858] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-13 07:10:07,858] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-13 07:10:07,858] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-13 07:10:07,858] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-13 07:10:07,863] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper | [2025-06-13 07:10:07,863] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-13 07:10:07,865] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-13 07:10:07,866] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-13 07:10:07,866] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-13 07:10:07,876] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper | [2025-06-13 07:10:07,876] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper | [2025-06-13 07:10:07,891] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper | [2025-06-13 07:10:07,891] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper | [2025-06-13 07:10:09,091] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) Tearing down containers... 
Container policy-csit Stopping
Container grafana Stopping
Container policy-opa-pdp Stopping
Container policy-csit Stopped
Container policy-csit Removing
Container policy-csit Removed
Container grafana Stopped
Container grafana Removing
Container grafana Removed
Container prometheus Stopping
Container prometheus Stopped
Container prometheus Removing
Container prometheus Removed
Container policy-opa-pdp Stopped
Container policy-opa-pdp Removing
Container policy-opa-pdp Removed
Container policy-pap Stopping
Container policy-pap Stopped
Container policy-pap Removing
Container policy-pap Removed
Container policy-api Stopping
Container kafka Stopping
Container kafka Stopped
Container kafka Removing
Container kafka Removed
Container zookeeper Stopping
Container zookeeper Stopped
Container zookeeper Removing
Container zookeeper Removed
Container policy-api Stopped
Container policy-api Removing
Container policy-api Removed
Container policy-db-migrator Stopping
Container policy-db-migrator Stopped
Container policy-db-migrator Removing
Container policy-db-migrator Removed
Container postgres Stopping
Container postgres Stopped
Container postgres Removing
Container postgres Removed
Network compose_default Removing
Network compose_default Removed
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2111 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml: Done!
-Copying log files to build dir: Done!
-Assigning results to build: Done!
-Checking thresholds: Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins6329987496612180933.sh
---> sysstat.sh
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins15386446783799965519.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp ']'
+ mkdir -p /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/archives/
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins17380402539682064220.sh
---> capture-instance-metadata.sh
Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-vXym from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-vXym/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins15897830481019614708.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp@tmp/config7676118078976797328tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins9063146697064511856.sh
---> create-netrc.sh
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins8163887239805888211.sh
---> python-tools-install.sh
Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-vXym from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-vXym/bin to PATH
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins11002158785395064892.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins11304029681685442021.sh
---> job-cost.sh
Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-vXym from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-vXym/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash -l /tmp/jenkins2601181862631150529.sh
---> logs-deploy.sh
Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-vXym from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-vXym/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-opa-pdp-master-project-csit-verify-opa-pdp/158
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
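Editor's note: for reference, the database bootstrap that the postgres container traces earlier in this log (/docker-entrypoint-initdb.d/db-pg.sh) reduces to the loop below. This is a minimal standalone sketch rather than the project's init script: it assumes psql is on the PATH, a local superuser connection, and that PGSQL_USER and PGSQL_PASSWORD are exported as they are in the compose environment.

#!/bin/bash
# Minimal sketch of the db-pg.sh bootstrap traced in the postgres log above.
# PGSQL_USER and PGSQL_PASSWORD are assumed to be exported by the caller.
set -euo pipefail

psql -U postgres -d postgres --command \
  "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';"

# Same database list as the traced script: one schema per policy component.
for db in migration pooling policyadmin policyclamp operationshistory clampacm; do
  psql -U postgres -d postgres --command "CREATE DATABASE ${db};"
  psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER};"
  psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER};"
done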
INFO: archiving logs to Nexus ---> uname -a: Linux prd-ubuntu1804-docker-8c-8g-20782 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux ---> lscpu: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 8 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC-Rome Processor Stepping: 0 CPU MHz: 2799.996 BogoMIPS: 5599.99 Virtualization: AMD-V Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 512K L3 cache: 16384K NUMA node0 CPU(s): 0-7 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities ---> nproc: 8 ---> df -h: Filesystem Size Used Avail Use% Mounted on udev 16G 0 16G 0% /dev tmpfs 3.2G 708K 3.2G 1% /run /dev/vda1 155G 15G 141G 10% / tmpfs 16G 0 16G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 16G 0 16G 0% /sys/fs/cgroup /dev/vda15 105M 4.4M 100M 5% /boot/efi tmpfs 3.2G 0 3.2G 0% /run/user/1001 ---> free -m: total used free shared buff/cache available Mem: 32167 880 24055 0 7231 30831 Swap: 1023 0 1023 ---> ip addr: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 link/ether fa:16:3e:ba:ff:92 brd ff:ff:ff:ff:ff:ff inet 10.30.106.13/23 brd 10.30.107.255 scope global dynamic ens3 valid_lft 85826sec preferred_lft 85826sec inet6 fe80::f816:3eff:feba:ff92/64 scope link valid_lft forever preferred_lft forever 3: docker0: mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:b8:90:c8:b3 brd ff:ff:ff:ff:ff:ff inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::42:b8ff:fe90:c8b3/64 scope link valid_lft forever preferred_lft forever ---> sar -b -r -n DEV: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-20782) 06/13/25 _x86_64_ (8 CPU) 07:07:54 LINUX RESTART (8 CPU) 07:08:01 tps rtps wtps bread/s bwrtn/s 07:09:01 400.77 74.04 326.73 5346.84 118900.72 07:10:02 537.86 22.26 515.60 2642.49 261365.77 07:11:01 245.65 0.15 245.50 10.17 11836.50 07:12:01 5.50 0.00 5.50 0.00 117.05 07:13:01 5.27 0.02 5.25 0.13 127.45 07:14:01 221.88 0.37 221.51 36.93 33889.69 07:15:01 6.78 0.00 6.78 0.00 158.37 07:16:01 10.85 0.00 10.85 0.00 285.15 07:17:01 58.56 2.28 56.27 124.51 1267.12 Average: 165.75 11.03 154.72 908.45 47616.00 07:08:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 07:09:01 30082032 31613284 2857188 8.67 67152 1775328 1510940 4.45 938840 1634200 156184 07:10:02 25019344 31458520 7919876 24.04 150124 6418092 3909660 11.50 1206680 6125996 8136 07:11:01 23363148 30031308 9576072 29.07 163620 6598372 7382920 21.72 2839832 
6094236 236 07:12:01 23345336 29998928 9593884 29.13 163776 6584864 7621700 22.42 2870440 6079284 52 07:13:01 23317324 29972392 9621896 29.21 164008 6586044 7681212 22.60 2898852 6077984 184 07:14:01 22702308 29892556 10236912 31.08 199992 7031428 7993700 23.52 3090160 6442484 2220 07:15:01 22690624 29881988 10248596 31.11 200124 7032040 7954324 23.40 3106036 6437160 696 07:16:01 22684140 29875944 10255080 31.13 200268 7032232 7988544 23.50 3112564 6436820 520 07:17:01 24634988 31569428 8304232 25.21 201696 6769368 1620972 4.77 1481720 6193072 27464 Average: 24204360 30477150 8734860 26.52 167862 6203085 5962664 17.54 2393903 5724582 21744 07:08:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 07:09:01 lo 1.73 1.73 0.18 0.18 0.00 0.00 0.00 0.00 07:09:01 ens3 574.45 363.32 1685.57 82.81 0.00 0.00 0.00 0.00 07:09:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 07:10:02 vethc2a7889 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 07:10:02 br-458f093f3ca7 0.03 0.12 0.00 0.01 0.00 0.00 0.00 0.00 07:10:02 veth090a393 0.03 0.05 0.00 0.00 0.00 0.00 0.00 0.00 07:10:02 lo 13.60 13.60 1.25 1.25 0.00 0.00 0.00 0.00 07:11:01 br-458f093f3ca7 31.88 42.70 1.99 316.12 0.00 0.00 0.00 0.00 07:11:01 veth090a393 93.36 93.10 16.31 18.94 0.00 0.00 0.00 0.00 07:11:01 lo 1.15 1.15 0.09 0.09 0.00 0.00 0.00 0.00 07:11:01 ens3 2225.00 1271.19 35791.15 160.41 0.00 0.00 0.00 0.00 07:12:01 br-458f093f3ca7 0.52 0.38 0.03 0.02 0.00 0.00 0.00 0.00 07:12:01 veth090a393 0.22 0.23 0.55 0.02 0.00 0.00 0.00 0.00 07:12:01 lo 1.40 1.40 0.11 0.11 0.00 0.00 0.00 0.00 07:12:01 ens3 1.03 1.03 0.12 0.29 0.00 0.00 0.00 0.00 07:13:01 br-458f093f3ca7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 07:13:01 veth090a393 0.22 0.22 0.55 0.02 0.00 0.00 0.00 0.00 07:13:01 lo 1.20 1.20 0.09 0.09 0.00 0.00 0.00 0.00 07:13:01 ens3 1.17 0.85 0.52 0.70 0.00 0.00 0.00 0.00 07:14:01 br-458f093f3ca7 0.23 0.25 0.02 0.02 0.00 0.00 0.00 0.00 07:14:01 veth090a393 33.31 33.24 4.28 8.22 0.00 0.00 0.00 0.00 07:14:01 lo 2.53 2.53 0.21 0.21 0.00 0.00 0.00 0.00 07:14:01 ens3 220.88 139.83 2195.84 11.54 0.00 0.00 0.00 0.00 07:15:01 br-458f093f3ca7 0.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00 07:15:01 veth090a393 68.61 68.34 8.31 16.68 0.00 0.00 0.00 0.00 07:15:01 lo 1.20 1.20 0.09 0.09 0.00 0.00 0.00 0.00 07:15:01 ens3 0.57 0.53 0.06 0.28 0.00 0.00 0.00 0.00 07:16:01 br-458f093f3ca7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 07:16:01 veth090a393 35.13 34.98 4.32 8.46 0.00 0.00 0.00 0.00 07:16:01 lo 1.40 1.40 0.11 0.11 0.00 0.00 0.00 0.00 07:16:01 ens3 1.23 0.98 0.27 0.50 0.00 0.00 0.00 0.00 07:17:01 lo 2.87 2.87 0.27 0.27 0.00 0.00 0.00 0.00 07:17:01 ens3 64.61 48.78 71.31 33.58 0.00 0.00 0.00 0.00 07:17:01 docker0 110.70 162.56 7.39 1347.68 0.00 0.00 0.00 0.00 Average: lo 3.01 3.01 0.27 0.27 0.00 0.00 0.00 0.00 Average: ens3 274.96 159.98 4169.62 22.73 0.00 0.00 0.00 0.00 Average: docker0 12.32 18.10 0.82 150.02 0.00 0.00 0.00 0.00 ---> sar -P ALL: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-20782) 06/13/25 _x86_64_ (8 CPU) 07:07:54 LINUX RESTART (8 CPU) 07:08:01 CPU %user %nice %system %iowait %steal %idle 07:09:01 all 9.67 0.00 1.33 2.86 0.03 86.11 07:09:01 0 12.17 0.00 1.35 0.88 0.03 85.56 07:09:01 1 5.96 0.00 2.19 7.27 0.03 84.54 07:09:01 2 5.75 0.00 0.72 0.48 0.02 93.03 07:09:01 3 5.52 0.00 0.79 2.48 0.05 91.16 07:09:01 4 3.91 0.00 0.33 9.94 0.03 85.78 07:09:01 5 2.99 0.00 0.91 0.43 0.02 95.65 07:09:01 6 7.77 0.00 1.13 0.28 0.03 90.78 07:09:01 7 33.32 0.00 3.16 1.19 0.07 62.27 07:10:02 all 18.14 0.00 6.89 6.70 0.08 68.19 07:10:02 0 15.87 0.00 
7.04 12.09 0.08 64.92 07:10:02 1 14.27 0.00 6.53 8.83 0.08 70.28 07:10:02 2 13.57 0.00 6.66 3.44 0.08 76.25 07:10:02 3 15.15 0.00 6.91 6.09 0.07 71.78 07:10:02 4 15.63 0.00 7.11 13.75 0.10 63.41 07:10:02 5 15.57 0.00 6.86 1.21 0.07 76.30 07:10:02 6 34.45 0.00 7.11 1.04 0.10 57.29 07:10:02 7 20.52 0.00 6.92 7.17 0.08 65.30 07:11:01 all 25.47 0.00 3.01 0.89 0.09 70.54 07:11:01 0 23.21 0.00 3.05 1.12 0.09 72.54 07:11:01 1 32.11 0.00 3.58 0.36 0.10 63.84 07:11:01 2 33.69 0.00 3.32 1.87 0.12 61.00 07:11:01 3 24.39 0.00 2.98 1.31 0.10 71.21 07:11:01 4 26.29 0.00 2.83 0.27 0.09 70.52 07:11:01 5 19.96 0.00 2.84 0.31 0.09 76.81 07:11:01 6 21.30 0.00 2.54 0.60 0.09 75.48 07:11:01 7 22.85 0.00 2.90 1.28 0.07 72.91 07:12:01 all 0.88 0.00 0.14 0.92 0.53 97.53 07:12:01 0 0.67 0.00 0.23 0.02 0.03 99.05 07:12:01 1 0.46 0.00 0.15 0.52 0.07 98.80 07:12:01 2 0.61 0.00 0.14 0.05 0.05 99.15 07:12:01 3 0.73 0.00 0.14 0.00 0.02 99.12 07:12:01 4 2.08 0.00 0.13 0.00 0.07 97.72 07:12:01 5 0.88 0.00 0.12 1.30 2.00 95.70 07:12:01 6 0.54 0.00 0.10 0.00 0.07 99.29 07:12:01 7 1.02 0.00 0.13 5.32 1.93 91.60 07:13:01 all 2.01 0.00 0.29 0.02 0.02 97.67 07:13:01 0 1.94 0.00 0.25 0.02 0.03 97.76 07:13:01 1 2.75 0.00 0.27 0.00 0.03 96.95 07:13:01 2 1.22 0.00 0.15 0.02 0.00 98.62 07:13:01 3 1.92 0.00 0.27 0.02 0.03 97.76 07:13:01 4 3.29 0.00 0.20 0.00 0.02 96.49 07:13:01 5 1.37 0.00 0.30 0.02 0.02 98.30 07:13:01 6 1.57 0.00 0.43 0.02 0.03 97.95 07:13:01 7 1.98 0.00 0.42 0.08 0.02 97.50 07:14:01 all 9.20 0.00 2.56 0.94 0.06 87.25 07:14:01 0 4.70 0.00 2.00 1.86 0.05 91.40 07:14:01 1 8.18 0.00 2.91 1.87 0.05 86.99 07:14:01 2 7.48 0.00 2.89 0.75 0.05 88.83 07:14:01 3 17.60 0.00 3.05 0.35 0.07 78.92 07:14:01 4 7.29 0.00 1.29 0.02 0.05 91.36 07:14:01 5 9.52 0.00 3.29 2.53 0.07 84.59 07:14:01 6 9.07 0.00 2.50 0.12 0.07 88.25 07:14:01 7 9.75 0.00 2.58 0.02 0.05 87.61 07:15:01 all 2.99 0.00 0.55 0.43 0.04 96.00 07:15:01 0 3.66 0.00 0.37 0.00 0.03 95.94 07:15:01 1 4.88 0.00 0.60 0.00 0.05 94.47 07:15:01 2 2.52 0.00 0.45 0.00 0.03 96.99 07:15:01 3 3.71 0.00 0.60 0.00 0.02 95.67 07:15:01 4 2.95 0.00 0.62 0.02 0.03 96.38 07:15:01 5 1.82 0.00 0.42 3.36 0.03 94.37 07:15:01 6 2.13 0.00 0.74 0.03 0.05 97.05 07:15:01 7 2.21 0.00 0.58 0.05 0.03 97.13 07:16:01 all 1.24 0.00 0.23 0.16 0.03 98.34 07:16:01 0 0.85 0.00 0.22 0.00 0.02 98.91 07:16:01 1 2.05 0.00 0.22 0.02 0.02 97.70 07:16:01 2 0.89 0.00 0.27 0.00 0.03 98.81 07:16:01 3 1.73 0.00 0.17 0.02 0.05 98.03 07:16:01 4 0.95 0.00 0.20 0.02 0.02 98.81 07:16:01 5 1.47 0.00 0.27 1.15 0.05 97.06 07:16:01 6 0.65 0.00 0.22 0.03 0.05 99.05 07:16:01 7 1.39 0.00 0.27 0.02 0.03 98.30 07:17:01 all 4.42 0.00 0.82 0.18 0.03 94.55 07:17:01 0 1.90 0.00 0.77 0.03 0.02 97.28 07:17:01 1 1.25 0.00 0.70 0.10 0.03 97.91 07:17:01 2 13.16 0.00 1.10 0.12 0.05 85.57 07:17:01 3 4.32 0.00 1.02 0.10 0.02 94.55 07:17:01 4 4.56 0.00 0.77 0.65 0.02 94.01 07:17:01 5 1.99 0.00 0.68 0.10 0.02 97.21 07:17:01 6 2.95 0.00 0.80 0.28 0.03 95.93 07:17:01 7 5.25 0.00 0.75 0.10 0.03 93.86 Average: all 8.18 0.00 1.75 1.45 0.10 88.52 Average: 0 7.17 0.00 1.69 1.77 0.04 89.34 Average: 1 7.94 0.00 1.90 2.11 0.05 88.00 Average: 2 8.71 0.00 1.73 0.74 0.05 88.76 Average: 3 8.30 0.00 1.76 1.15 0.05 88.74 Average: 4 7.38 0.00 1.49 2.73 0.05 88.35 Average: 5 6.14 0.00 1.74 1.16 0.26 90.71 Average: 6 8.91 0.00 1.73 0.27 0.06 89.04 Average: 7 10.87 0.00 1.96 1.69 0.26 85.22
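Editor's note: the sar tables above are easier to scan once the per-interval rows are filtered down to the "Average:" summaries. A minimal sketch, assuming the "sar -P ALL" section has been saved to a local file named sar_cpu.txt (a hypothetical name; it is not produced by the job itself):

#!/bin/bash
# Print the overall CPU averages from a saved copy of the "sar -P ALL" output.
# Columns in that output: CPU %user %nice %system %iowait %steal %idle.
awk '$1 == "Average:" && $2 == "all" {
  printf "user=%s%% system=%s%% iowait=%s%% steal=%s%% idle=%s%%\n", $3, $5, $6, $7, $8
}' sar_cpu.txt

For this run that yields user=8.18% system=1.75% iowait=1.45% steal=0.10% idle=88.52%, i.e. the build host was mostly idle outside the container start-up and CSIT windows.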