Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/141338
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-22131 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-gutrCvLBoubY/agent.2074
SSH_AGENT_PID=2076
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp@tmp/private_key_857673937961227561.key (/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp@tmp/private_key_857673937961227561.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/38/141338/2 # timeout=30
 > git rev-parse a4383ddb08daf12bc481139efd90352bfa803726^{commit} # timeout=10
JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
Checking out Revision a4383ddb08daf12bc481139efd90352bfa803726 (refs/changes/38/141338/2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a4383ddb08daf12bc481139efd90352bfa803726 # timeout=30
Commit message: "Fix CSIT Helm kafka installation"
 > git rev-parse FETCH_HEAD^{commit} # timeout=10
 > git rev-list --no-walk ed38a50541249063daf2cfb00b312fb173adeace # timeout=10
provisioning config files...
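Note: the checkout above can be reproduced outside Jenkins with plain git; a minimal sketch, assuming the mirror is readable anonymously (the job itself authenticates via the onap-jenkins-ssh credential):

# Fetch patch set 2 of Gerrit change 141338 and check it out detached,
# mirroring the commands the Git plugin runs in the log above.
git init docker && cd docker
git fetch git://cloud.onap.org/mirror/policy/docker.git refs/changes/38/141338/2
git checkout -f FETCH_HEAD   # resolves to a4383ddb08daf12bc481139efd90352bfa803726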
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins17042874026144206003.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-OWna
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-OWna/bin to PATH
Generating Requirements File
Python 3.10.6
pip 25.1.1 from /tmp/venv-OWna/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.6.2
aspy.yaml==1.3.0
attrs==25.3.0
autopage==0.5.2
beautifulsoup4==4.13.4
boto3==1.38.38
botocore==1.38.38
bs4==0.0.2
cachetools==5.5.2
certifi==2025.6.15
cffi==1.17.1
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.4.2
click==8.2.1
cliff==4.10.0
cmd2==2.6.1
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.2.1
defusedxml==0.7.1
Deprecated==1.2.18
distlib==0.3.9
dnspython==2.7.0
docker==7.1.0
dogpile.cache==1.4.0
durationpy==0.10
email_validator==2.2.0
filelock==3.18.0
future==1.0.0
gitdb==4.0.12
GitPython==3.1.44
google-auth==2.40.3
httplib2==0.22.0
identify==2.6.12
idna==3.10
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.6
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.24.0
jsonschema-specifications==2025.4.1
keystoneauth1==5.11.1
kubernetes==33.1.0
lftools==0.37.13
lxml==5.4.0
MarkupSafe==3.0.2
msgpack==1.1.1
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.3.0
openstacksdk==4.6.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==4.0.2
oslo.config==9.8.0
oslo.context==6.0.0
oslo.i18n==6.5.1
oslo.log==7.1.0
oslo.serialization==5.7.0
oslo.utils==9.0.0
packaging==25.0
pbr==6.1.1
platformdirs==4.3.8
prettytable==3.16.0
psutil==7.0.0
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.6.1
PyJWT==2.10.1
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.7.0
python-dateutil==2.9.0.post0
python-heatclient==4.2.0
python-jenkins==1.8.2
python-keystoneclient==5.6.0
python-magnumclient==4.8.1
python-openstackclient==8.1.0
python-swiftclient==4.8.0
PyYAML==6.0.2
referencing==0.36.2
requests==2.32.4
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.25.1
rsa==4.9.1
ruamel.yaml==0.18.14
ruamel.yaml.clib==0.2.12
s3transfer==0.13.0
simplejson==3.20.1
six==1.17.0
smmap==5.0.2
soupsieve==2.7
stevedore==5.4.1
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.3
tqdm==4.67.1
typing_extensions==4.14.0
tzdata==2025.2
urllib3==1.26.20
virtualenv==20.31.2
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.17.2
xdg==6.0.0
xmltodict==0.14.2
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
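Note: the lf-activate-venv() steps above amount to a standard venv bootstrap; a rough sketch, not the actual python-tools-install.sh (the /tmp/venv-example path is illustrative):

# Create the tooling venv, install lftools, and record resolved versions,
# approximating the "Creating python3 venv" / "Generating Requirements File"
# steps logged above.
python3 -m venv /tmp/venv-example
. /tmp/venv-example/bin/activate
pip install lftools
pip freeze                 # yields a package list like the one above
export PATH="/tmp/venv-example/bin:$PATH"   # "Adding ... to PATH"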
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/sh /tmp/jenkins15448608035449911583.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/sh -xe /tmp/jenkins9426350946259095002.sh
+ /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/csit/run-project-csit.sh opa-pdp
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
[curl progress output omitted: 60.2 MB binary downloaded]
Setting project configuration for: opa-pdp
Configuring docker compose...
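Note: the two login warnings and the plugin install above suggest the shape of these setup steps; a hedged sketch (the $REGISTRY/$DOCKER_USER/$DOCKER_PASSWORD variables are placeholders, and the compose download URL is the upstream release pattern, not necessarily what run-project-csit.sh uses):

# Log in via stdin instead of --password, as the first warning recommends.
echo "$DOCKER_PASSWORD" | docker login --username "$DOCKER_USER" --password-stdin "$REGISTRY"

# Install the Docker Compose v2 CLI plugin, roughly matching the ~60 MB
# download shown above, so that 'docker compose' resolves.
mkdir -p ~/.docker/cli-plugins
curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
  -o ~/.docker/cli-plugins/docker-compose
chmod +x ~/.docker/cli-plugins/docker-compose
docker compose version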
Starting opa-pdp using postgres + Grafana/Prometheus
policy-db-migrator Pulling
prometheus Pulling
opa-pdp Pulling
pap Pulling
grafana Pulling
postgres Pulling
kafka Pulling
api Pulling
zookeeper Pulling
[per-layer download/extraction progress omitted]
policy-db-migrator Pulled
api Pulled
pap Pulled
opa-pdp Pulled
prometheus Pulled
[section ends with grafana, postgres, kafka, and zookeeper layers still downloading/extracting]
01e0882c90d9 Extracting [==================================================>] 1.447MB/1.447MB 55f2b468da67 Extracting [====> ] 24.51MB/257.9MB eabd8714fec9 Extracting [===================> ] 145.9MB/375MB 04f6155c873d Extracting [==================================> ] 74.65MB/107.3MB 55f2b468da67 Extracting [======> ] 32.31MB/257.9MB 04f6155c873d Extracting [====================================> ] 77.43MB/107.3MB eabd8714fec9 Extracting [===================> ] 149.8MB/375MB 55f2b468da67 Extracting [========> ] 46.24MB/257.9MB 04f6155c873d Extracting [=====================================> ] 80.22MB/107.3MB eabd8714fec9 Extracting [====================> ] 154.3MB/375MB 55f2b468da67 Extracting [===========> ] 58.49MB/257.9MB 04f6155c873d Extracting [=======================================> ] 84.12MB/107.3MB eabd8714fec9 Extracting [====================> ] 157.1MB/375MB 55f2b468da67 Extracting [=============> ] 70.19MB/257.9MB 04f6155c873d Extracting [==========================================> ] 90.8MB/107.3MB 01e0882c90d9 Pull complete 531ee2cf3c0c Extracting [> ] 98.3kB/8.066MB eabd8714fec9 Extracting [=====================> ] 162.1MB/375MB 55f2b468da67 Extracting [================> ] 84.67MB/257.9MB 04f6155c873d Extracting [=============================================> ] 96.93MB/107.3MB eabd8714fec9 Extracting [======================> ] 167.1MB/375MB 55f2b468da67 Extracting [==================> ] 96.93MB/257.9MB 531ee2cf3c0c Extracting [=> ] 294.9kB/8.066MB 04f6155c873d Extracting [==============================================> ] 100.3MB/107.3MB eabd8714fec9 Extracting [=======================> ] 176MB/375MB 55f2b468da67 Extracting [===================> ] 102.5MB/257.9MB 531ee2cf3c0c Extracting [==========================> ] 4.325MB/8.066MB eabd8714fec9 Extracting [========================> ] 186.6MB/375MB 531ee2cf3c0c Extracting [=====================================> ] 5.997MB/8.066MB 55f2b468da67 Extracting [====================> ] 107.5MB/257.9MB 04f6155c873d Extracting [================================================> ] 103.1MB/107.3MB 531ee2cf3c0c Extracting [==================================================>] 8.066MB/8.066MB eabd8714fec9 Extracting [==========================> ] 197.2MB/375MB 55f2b468da67 Extracting [=====================> ] 112MB/257.9MB 55f2b468da67 Extracting [======================> ] 114.2MB/257.9MB 04f6155c873d Extracting [================================================> ] 104.2MB/107.3MB eabd8714fec9 Extracting [===========================> ] 206.1MB/375MB 55f2b468da67 Extracting [=======================> ] 119.2MB/257.9MB 04f6155c873d Extracting [=================================================> ] 105.3MB/107.3MB eabd8714fec9 Extracting [============================> ] 216.7MB/375MB 531ee2cf3c0c Pull complete ed54a7dee1d8 Extracting [=> ] 32.77kB/1.196MB 04f6155c873d Extracting [==================================================>] 107.3MB/107.3MB 55f2b468da67 Extracting [=======================> ] 123.7MB/257.9MB 04f6155c873d Pull complete eabd8714fec9 Extracting [=============================> ] 220.6MB/375MB ed54a7dee1d8 Extracting [===========================> ] 655.4kB/1.196MB ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB ed54a7dee1d8 Pull complete 12c5c803443f Extracting [==================================================>] 116B/116B 12c5c803443f Extracting 
[==================================================>] 116B/116B 55f2b468da67 Extracting [========================> ] 128.7MB/257.9MB eabd8714fec9 Extracting [=============================> ] 224.5MB/375MB 85dde7dceb0a Extracting [> ] 557.1kB/63.48MB 55f2b468da67 Extracting [=========================> ] 133.1MB/257.9MB 12c5c803443f Pull complete eabd8714fec9 Extracting [==============================> ] 227.8MB/375MB e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB 55f2b468da67 Extracting [==========================> ] 136.5MB/257.9MB 85dde7dceb0a Extracting [=> ] 1.671MB/63.48MB eabd8714fec9 Extracting [==============================> ] 232.3MB/375MB e27c75a98748 Pull complete 55f2b468da67 Extracting [===========================> ] 140.4MB/257.9MB 85dde7dceb0a Extracting [=> ] 2.228MB/63.48MB eabd8714fec9 Extracting [===============================> ] 235.1MB/375MB 85dde7dceb0a Extracting [==> ] 2.785MB/63.48MB 55f2b468da67 Extracting [============================> ] 145.4MB/257.9MB e73cb4a42719 Extracting [> ] 557.1kB/109.1MB eabd8714fec9 Extracting [===============================> ] 237.9MB/375MB 55f2b468da67 Extracting [============================> ] 148.2MB/257.9MB eabd8714fec9 Extracting [================================> ] 241.8MB/375MB e73cb4a42719 Extracting [==> ] 4.456MB/109.1MB 85dde7dceb0a Extracting [===> ] 4.456MB/63.48MB 55f2b468da67 Extracting [=============================> ] 151MB/257.9MB e73cb4a42719 Extracting [===> ] 7.242MB/109.1MB eabd8714fec9 Extracting [================================> ] 245.7MB/375MB 85dde7dceb0a Extracting [===> ] 5.014MB/63.48MB 55f2b468da67 Extracting [=============================> ] 153.2MB/257.9MB eabd8714fec9 Extracting [=================================> ] 248.4MB/375MB e73cb4a42719 Extracting [====> ] 10.58MB/109.1MB 85dde7dceb0a Extracting [======> ] 7.799MB/63.48MB 55f2b468da67 Extracting [==============================> ] 157.1MB/257.9MB eabd8714fec9 Extracting [=================================> ] 251.2MB/375MB e73cb4a42719 Extracting [======> ] 14.48MB/109.1MB 85dde7dceb0a Extracting [=======> ] 9.47MB/63.48MB 55f2b468da67 Extracting [==============================> ] 159.9MB/257.9MB eabd8714fec9 Extracting [==================================> ] 255.1MB/375MB e73cb4a42719 Extracting [========> ] 18.94MB/109.1MB 85dde7dceb0a Extracting [=========> ] 11.7MB/63.48MB 55f2b468da67 Extracting [===============================> ] 163.2MB/257.9MB eabd8714fec9 Extracting [==================================> ] 258.5MB/375MB e73cb4a42719 Extracting [==========> ] 22.84MB/109.1MB 85dde7dceb0a Extracting [==========> ] 13.93MB/63.48MB 55f2b468da67 Extracting [================================> ] 166MB/257.9MB eabd8714fec9 Extracting [==================================> ] 262.4MB/375MB e73cb4a42719 Extracting [============> ] 26.18MB/109.1MB 55f2b468da67 Extracting [================================> ] 169.9MB/257.9MB eabd8714fec9 Extracting [===================================> ] 266.3MB/375MB 85dde7dceb0a Extracting [=============> ] 16.71MB/63.48MB e73cb4a42719 Extracting [=============> ] 29.52MB/109.1MB 55f2b468da67 Extracting [=================================> ] 171MB/257.9MB eabd8714fec9 Extracting [===================================> ] 268.5MB/375MB 85dde7dceb0a Extracting [==============> ] 18.38MB/63.48MB e73cb4a42719 Extracting [===============> ] 33.42MB/109.1MB 55f2b468da67 Extracting 
[=================================> ] 172.1MB/257.9MB 85dde7dceb0a Extracting [===============> ] 20.05MB/63.48MB eabd8714fec9 Extracting [===================================> ] 269.6MB/375MB e73cb4a42719 Extracting [=================> ] 38.99MB/109.1MB 85dde7dceb0a Extracting [=================> ] 22.84MB/63.48MB e73cb4a42719 Extracting [===================> ] 43.45MB/109.1MB eabd8714fec9 Extracting [====================================> ] 271.3MB/375MB 55f2b468da67 Extracting [=================================> ] 173.8MB/257.9MB 85dde7dceb0a Extracting [====================> ] 25.62MB/63.48MB e73cb4a42719 Extracting [=====================> ] 47.91MB/109.1MB eabd8714fec9 Extracting [====================================> ] 272.4MB/375MB 55f2b468da67 Extracting [==================================> ] 175.5MB/257.9MB 85dde7dceb0a Extracting [======================> ] 28.41MB/63.48MB e73cb4a42719 Extracting [=======================> ] 51.25MB/109.1MB eabd8714fec9 Extracting [====================================> ] 273.5MB/375MB 55f2b468da67 Extracting [==================================> ] 177.1MB/257.9MB 85dde7dceb0a Extracting [=======================> ] 30.08MB/63.48MB e73cb4a42719 Extracting [========================> ] 52.92MB/109.1MB eabd8714fec9 Extracting [====================================> ] 275.2MB/375MB 55f2b468da67 Extracting [==================================> ] 179.9MB/257.9MB 85dde7dceb0a Extracting [=========================> ] 31.75MB/63.48MB e73cb4a42719 Extracting [=========================> ] 55.15MB/109.1MB eabd8714fec9 Extracting [=====================================> ] 278.5MB/375MB 55f2b468da67 Extracting [===================================> ] 183.3MB/257.9MB 85dde7dceb0a Extracting [===========================> ] 35.09MB/63.48MB e73cb4a42719 Extracting [==========================> ] 57.93MB/109.1MB eabd8714fec9 Extracting [=====================================> ] 282.4MB/375MB 55f2b468da67 Extracting [====================================> ] 188.8MB/257.9MB 85dde7dceb0a Extracting [=============================> ] 37.88MB/63.48MB e73cb4a42719 Extracting [============================> ] 61.28MB/109.1MB eabd8714fec9 Extracting [======================================> ] 288MB/375MB 85dde7dceb0a Extracting [==============================> ] 38.44MB/63.48MB e73cb4a42719 Extracting [============================> ] 62.39MB/109.1MB 55f2b468da67 Extracting [=====================================> ] 193.3MB/257.9MB eabd8714fec9 Extracting [=======================================> ] 293MB/375MB 85dde7dceb0a Extracting [================================> ] 40.67MB/63.48MB e73cb4a42719 Extracting [==============================> ] 66.85MB/109.1MB 55f2b468da67 Extracting [=====================================> ] 195MB/257.9MB eabd8714fec9 Extracting [=======================================> ] 294.7MB/375MB 85dde7dceb0a Extracting [=================================> ] 42.89MB/63.48MB e73cb4a42719 Extracting [================================> ] 71.86MB/109.1MB 55f2b468da67 Extracting [======================================> ] 196.6MB/257.9MB e73cb4a42719 Extracting [==================================> ] 76.32MB/109.1MB eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB 85dde7dceb0a Extracting [===================================> ] 45.68MB/63.48MB 55f2b468da67 Extracting [======================================> ] 198.9MB/257.9MB e73cb4a42719 Extracting [=====================================> ] 81.33MB/109.1MB 85dde7dceb0a Extracting 
[======================================> ] 48.46MB/63.48MB eabd8714fec9 Extracting [=======================================> ] 299.1MB/375MB 55f2b468da67 Extracting [======================================> ] 200.5MB/257.9MB e73cb4a42719 Extracting [=======================================> ] 86.34MB/109.1MB eabd8714fec9 Extracting [========================================> ] 300.8MB/375MB 85dde7dceb0a Extracting [=======================================> ] 50.69MB/63.48MB 55f2b468da67 Extracting [=======================================> ] 202.8MB/257.9MB e73cb4a42719 Extracting [=========================================> ] 90.8MB/109.1MB 85dde7dceb0a Extracting [=========================================> ] 52.92MB/63.48MB eabd8714fec9 Extracting [========================================> ] 303MB/375MB 55f2b468da67 Extracting [=======================================> ] 203.3MB/257.9MB e73cb4a42719 Extracting [==========================================> ] 91.91MB/109.1MB 85dde7dceb0a Extracting [===========================================> ] 55.15MB/63.48MB eabd8714fec9 Extracting [========================================> ] 304.7MB/375MB 55f2b468da67 Extracting [=======================================> ] 205.6MB/257.9MB e73cb4a42719 Extracting [==========================================> ] 93.59MB/109.1MB e73cb4a42719 Extracting [===========================================> ] 95.81MB/109.1MB 55f2b468da67 Extracting [========================================> ] 206.7MB/257.9MB eabd8714fec9 Extracting [========================================> ] 306.4MB/375MB 85dde7dceb0a Extracting [==============================================> ] 59.05MB/63.48MB 55f2b468da67 Extracting [========================================> ] 207.2MB/257.9MB e73cb4a42719 Extracting [============================================> ] 96.93MB/109.1MB eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB e73cb4a42719 Extracting [============================================> ] 97.48MB/109.1MB 85dde7dceb0a Extracting [==============================================> ] 59.6MB/63.48MB eabd8714fec9 Extracting [=========================================> ] 307.5MB/375MB 55f2b468da67 Extracting [========================================> ] 207.8MB/257.9MB e73cb4a42719 Extracting [=============================================> ] 99.16MB/109.1MB 85dde7dceb0a Extracting [=================================================> ] 62.95MB/63.48MB eabd8714fec9 Extracting [=========================================> ] 309.7MB/375MB 85dde7dceb0a Extracting [==================================================>] 63.48MB/63.48MB 85dde7dceb0a Extracting [==================================================>] 63.48MB/63.48MB e73cb4a42719 Extracting [==============================================> ] 100.8MB/109.1MB 55f2b468da67 Extracting [========================================> ] 210.6MB/257.9MB eabd8714fec9 Extracting [=========================================> ] 310.8MB/375MB e73cb4a42719 Extracting [===============================================> ] 103.1MB/109.1MB 55f2b468da67 Extracting [=========================================> ] 212.8MB/257.9MB eabd8714fec9 Extracting [=========================================> ] 312.5MB/375MB 55f2b468da67 Extracting [=========================================> ] 213.4MB/257.9MB e73cb4a42719 Extracting [===============================================> ] 104.2MB/109.1MB 85dde7dceb0a Pull complete 7009d5001b77 Extracting [==================================================>] 11.92kB/11.92kB 
7009d5001b77 Extracting [==================================================>] 11.92kB/11.92kB eabd8714fec9 Extracting [=========================================> ] 313.6MB/375MB 55f2b468da67 Extracting [=========================================> ] 215.6MB/257.9MB e73cb4a42719 Extracting [================================================> ] 105.8MB/109.1MB eabd8714fec9 Extracting [==========================================> ] 316.4MB/375MB 55f2b468da67 Extracting [==========================================> ] 217.8MB/257.9MB eabd8714fec9 Extracting [==========================================> ] 318.6MB/375MB 55f2b468da67 Extracting [==========================================> ] 220.6MB/257.9MB e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB 55f2b468da67 Extracting [==========================================> ] 221.7MB/257.9MB eabd8714fec9 Extracting [==========================================> ] 319.8MB/375MB 55f2b468da67 Extracting [===========================================> ] 222.8MB/257.9MB e73cb4a42719 Extracting [=================================================> ] 108.6MB/109.1MB eabd8714fec9 Extracting [==========================================> ] 320.9MB/375MB e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB 7009d5001b77 Pull complete 55f2b468da67 Extracting [===========================================> ] 223.4MB/257.9MB eabd8714fec9 Extracting [==========================================> ] 321.4MB/375MB 538deb30e80c Extracting [==================================================>] 1.225kB/1.225kB 538deb30e80c Extracting [==================================================>] 1.225kB/1.225kB 55f2b468da67 Extracting [===========================================> ] 224.5MB/257.9MB eabd8714fec9 Extracting [===========================================> ] 322.5MB/375MB 55f2b468da67 Extracting [===========================================> ] 226.2MB/257.9MB eabd8714fec9 Extracting [===========================================> ] 323.1MB/375MB 55f2b468da67 Extracting [============================================> ] 227.3MB/257.9MB eabd8714fec9 Extracting [===========================================> ] 325.3MB/375MB 55f2b468da67 Extracting [============================================> ] 228.4MB/257.9MB eabd8714fec9 Extracting [===========================================> ] 327MB/375MB e73cb4a42719 Pull complete 538deb30e80c Pull complete eabd8714fec9 Extracting [===========================================> ] 327.5MB/375MB 55f2b468da67 Extracting [============================================> ] 229.5MB/257.9MB eabd8714fec9 Extracting [===========================================> ] 329.8MB/375MB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 55f2b468da67 Extracting [============================================> ] 231.7MB/257.9MB eabd8714fec9 Extracting [============================================> ] 330.9MB/375MB 55f2b468da67 Extracting [=============================================> ] 232.3MB/257.9MB eabd8714fec9 Extracting [============================================> ] 331.4MB/375MB 55f2b468da67 Extracting [=============================================> ] 232.8MB/257.9MB 55f2b468da67 Extracting [=============================================> ] 234.5MB/257.9MB eabd8714fec9 Extracting [============================================> ] 332MB/375MB eabd8714fec9 
Extracting [============================================> ] 332.6MB/375MB a83b68436f09 Pull complete 55f2b468da67 Extracting [=============================================> ] 236.2MB/257.9MB eabd8714fec9 Extracting [============================================> ] 335.3MB/375MB grafana Pulled 55f2b468da67 Extracting [==============================================> ] 240.6MB/257.9MB eabd8714fec9 Extracting [=============================================> ] 339.2MB/375MB 787d6bee9571 Extracting [==================================================>] 127B/127B 787d6bee9571 Extracting [==================================================>] 127B/127B 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB 55f2b468da67 Extracting [===============================================> ] 245.1MB/257.9MB eabd8714fec9 Extracting [=============================================> ] 341.5MB/375MB 787d6bee9571 Pull complete 13ff0988aaea Extracting [==================================================>] 167B/167B 13ff0988aaea Extracting [==================================================>] 167B/167B 55f2b468da67 Extracting [=================================================> ] 253.5MB/257.9MB eabd8714fec9 Extracting [=============================================> ] 342MB/375MB 13ff0988aaea Pull complete 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 55f2b468da67 Extracting [=================================================> ] 256.8MB/257.9MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 4b82842ab819 Pull complete 7e568a0dc8fb Extracting [==================================================>] 184B/184B 7e568a0dc8fb Extracting [==================================================>] 184B/184B 55f2b468da67 Pull complete 82bfc142787e Extracting [> ] 98.3kB/8.613MB eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB 7e568a0dc8fb Pull complete postgres Pulled 82bfc142787e Extracting [=======================> ] 4.129MB/8.613MB eabd8714fec9 Extracting [=============================================> ] 343.7MB/375MB 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB 82bfc142787e Pull complete 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB 46baca71a4ef Pull complete eabd8714fec9 Extracting [==============================================> ] 350.9MB/375MB b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB eabd8714fec9 Extracting [===============================================> ] 355.4MB/375MB b0e0ef7895f4 Extracting [=================> ] 12.98MB/37.01MB eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB b0e0ef7895f4 Extracting [================================> ] 23.99MB/37.01MB eabd8714fec9 Extracting [================================================> ] 361.5MB/375MB b0e0ef7895f4 Extracting [===============================================> ] 35MB/37.01MB b0e0ef7895f4 Extracting 
[==================================================>] 37.01MB/37.01MB b0e0ef7895f4 Pull complete c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB eabd8714fec9 Extracting [=================================================> ] 367.7MB/375MB c0c90eeb8aca Pull complete 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 5cfb27c10ea5 Extracting [==================================================>] 852B/852B eabd8714fec9 Extracting [=================================================> ] 372.7MB/375MB eabd8714fec9 Extracting [==================================================>] 375MB/375MB 5cfb27c10ea5 Pull complete 40a5eed61bb0 Extracting [==================================================>] 98B/98B 40a5eed61bb0 Extracting [==================================================>] 98B/98B 40a5eed61bb0 Pull complete e040ea11fa10 Extracting [==================================================>] 173B/173B e040ea11fa10 Extracting [==================================================>] 173B/173B eabd8714fec9 Pull complete 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB e040ea11fa10 Pull complete 45fd2fec8a19 Pull complete 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB 8f10199ed94b Extracting [============> ] 2.163MB/8.768MB 09d5a3f70313 Extracting [=====> ] 12.81MB/109.2MB 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 8f10199ed94b Pull complete f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 09d5a3f70313 Extracting [============> ] 27.3MB/109.2MB 09d5a3f70313 Extracting [===================> ] 43.45MB/109.2MB f963a77d2726 Pull complete f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB 09d5a3f70313 Extracting [==========================> ] 58.49MB/109.2MB f3a82e9f1761 Extracting [===============> ] 13.76MB/44.41MB 09d5a3f70313 Extracting [===================================> ] 76.87MB/109.2MB f3a82e9f1761 Extracting [================================> ] 28.44MB/44.41MB 09d5a3f70313 Extracting [===========================================> ] 94.7MB/109.2MB f3a82e9f1761 Extracting [=================================================> ] 44.04MB/44.41MB f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB 09d5a3f70313 Extracting [================================================> ] 105.3MB/109.2MB f3a82e9f1761 Pull complete 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09d5a3f70313 Pull complete 79161a3f5362 Pull complete 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 9c266ba63f51 Extracting 
[==================================================>] 1.105kB/1.105kB 356f5c2c843b Pull complete 9c266ba63f51 Pull complete 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Extracting [==================================================>] 851B/851B kafka Pulled 2e8a7df9c2ee Pull complete 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Pull complete 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Pull complete 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 71a9f6a9ab4d Pull complete da3ed5db7103 Extracting [> ] 557.1kB/127.4MB da3ed5db7103 Extracting [=====> ] 15.04MB/127.4MB da3ed5db7103 Extracting [============> ] 30.64MB/127.4MB da3ed5db7103 Extracting [==================> ] 47.91MB/127.4MB da3ed5db7103 Extracting [=========================> ] 65.73MB/127.4MB da3ed5db7103 Extracting [================================> ] 83MB/127.4MB da3ed5db7103 Extracting [=======================================> ] 100.3MB/127.4MB da3ed5db7103 Extracting [==============================================> ] 118.1MB/127.4MB da3ed5db7103 Extracting [================================================> ] 123.1MB/127.4MB da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB da3ed5db7103 Pull complete c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Pull complete zookeeper Pulled Network compose_default Creating Network compose_default Created Container postgres Creating Container prometheus Creating Container zookeeper Creating Container postgres Created Container zookeeper Created Container policy-db-migrator Creating Container kafka Creating Container prometheus Created Container grafana Creating Container policy-db-migrator Created Container policy-api Creating Container kafka Created Container grafana Created Container policy-api Created Container policy-pap Creating Container policy-pap Created Container policy-opa-pdp Creating Container policy-opa-pdp Created Container postgres Starting Container zookeeper Starting Container prometheus Starting Container prometheus Started Container grafana Starting Container grafana Started Container postgres Started Container policy-db-migrator Starting Container policy-db-migrator Started Container policy-api Starting Container zookeeper Started Container kafka Starting Container policy-api Started Container kafka Started Container policy-pap Starting Container policy-pap Started Container policy-opa-pdp Starting Container policy-opa-pdp Started Prometheus server: http://localhost:30259 Grafana server: http://localhost:30269 Waiting 3 minutes for OPA-PDP to start... Checking if REST port 30003 is open on localhost ... 
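The port probe above is emitted by the CSIT scripts, which are not shown in this log. A minimal sketch of such a readiness loop in bash, assuming a 3-minute budget (the script name and argument layout are illustrative, not taken from this job):

#!/bin/bash
# wait-for-port.sh -- poll until a TCP port accepts connections or a timeout expires (illustrative helper)
host="${1:-localhost}"
port="${2:-30003}"
timeout="${3:-180}"          # seconds; mirrors the 3-minute wait above
for ((i = 0; i < timeout; i++)); do
    # bash's /dev/tcp pseudo-device succeeds once something is listening on host:port
    if (echo > "/dev/tcp/${host}/${port}") 2>/dev/null; then
        echo "Checking if REST port ${port} is open on ${host} ... OK"
        exit 0
    fi
    sleep 1
done
echo "Port ${port} on ${host} did not open within ${timeout}s" >&2
exit 1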
IMAGE                                                      NAMES            STATUS
nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 3 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 3 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 3 minutes
nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 3 minutes
nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 3 minutes
nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 3 minutes
Checking if REST port 30012 is open on localhost ...
[container status table repeated, identical to the check above]
Cloning into '/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/csit/resources/tests/models'...
Building robot framework docker image
sha256:defa6d33036828a8c26a0cce07e0f5e0a77d564fcf50267aeb5379b8f1139f35
top - 15:22:51 up 6 min, 0 users, load average: 0.89, 1.12, 0.60
Tasks: 218 total, 1 running, 148 sleeping, 0 stopped, 0 zombie
%Cpu(s): 11.0 us, 2.7 sy, 0.0 ni, 84.4 id, 1.7 wa, 0.0 hi, 0.1 si, 0.1 st
              total        used        free      shared  buff/cache   available
Mem:            31G        2.3G         21G         28M        7.3G         28G
Swap:          1.0G          0B        1.0G
[container status table repeated, identical to the checks above]
CONTAINER ID   NAME             CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
4368837609c2   policy-opa-pdp   0.27%   11.94MiB / 31.41GiB   0.04%   81.5kB / 76.6kB   12.3kB / 0B     21
900a27ebb6e2   policy-pap       0.78%   482.9MiB / 31.41GiB   1.50%   2.21MB / 1.23MB   0B / 139MB      69
66194e852ac6   policy-api       0.12%   416.8MiB / 31.41GiB   1.30%   1.15MB / 1.08MB   0B / 0B         60
944f0aa44b21   grafana          0.12%   113.3MiB / 31.41GiB   0.35%   19MB / 196kB      0B / 30.4MB     20
1e1037642104   kafka            2.35%   400.6MiB / 31.41GiB   1.25%   309kB / 293kB     0B / 692kB      83
93f2b4bdb0c5   zookeeper        0.08%   84.66MiB / 31.41GiB   0.26%   56.7kB / 49.6kB   0B / 475kB      62
8e4ee741251b   postgres         0.01%   86.58MiB / 31.41GiB   0.27%   2.55MB / 3.73MB   217kB / 159MB   26
5fb3dc443b8f   prometheus       0.00%   21.28MiB / 31.41GiB   0.07%   239kB / 10.3kB    12.3kB / 0B     13
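The status and resource tables above are ordinary docker output. A sketch of commands that produce equivalent one-shot snapshots (the format string is chosen here for readability; the job's exact flags are not shown in this log):

# one-shot container status, matching the IMAGE/NAMES/STATUS columns above
docker ps --format 'table {{.Image}}\t{{.Names}}\t{{.Status}}'

# one-shot resource snapshot, matching the docker stats table above
docker stats --no-stream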
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
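Each "-v NAME:VALUE" above becomes a Robot Framework variable available to the suites. A minimal sketch of the equivalent direct invocation, keeping only the variables these two suites use (paths and values mirror the log; running robot this way outside the policy-csit container is an assumption):

robot --outputdir /tmp/results \
      -v POLICY_OPA_IP:policy-opa-pdp:8282 \
      -v PROMETHEUS_IP:prometheus:9090 \
      -v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies \
      opa-pdp-test.robot opa-pdp-slas.robot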
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Healthcheck :: Verify OPA PDP health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateDataBeforePolicyDeployment | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesZonePolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesVehiclePolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesAbacPolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test | PASS |
policy-csit | 5 tests, 5 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas | PASS |
policy-csit | 5 tests, 5 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas | PASS |
policy-csit | 10 tests, 10 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
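The SLA suite above validates OPA-PDP counters and response times through Prometheus, which this job exposes at http://localhost:30259. A sketch of the kind of query such a check performs against the standard Prometheus HTTP API (the metric name below is a placeholder, not one read from this log):

# query a counter through the Prometheus HTTP API exposed by this job
curl -s 'http://localhost:30259/api/v1/query' \
     --data-urlencode 'query=opa_pdp_decisions_total' \
     | python3 -m json.tool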
path=/var/lib/grafana grafana | logger=settings t=2025-06-18T15:19:05.355715678Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2025-06-18T15:19:05.355720009Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2025-06-18T15:19:05.355723739Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2025-06-18T15:19:05.355727229Z level=info msg="App mode production" grafana | logger=featuremgmt t=2025-06-18T15:19:05.356067961Z level=info msg=FeatureToggles azureMonitorPrometheusExemplars=true alertingRulePermanentlyDelete=true dashboardSceneSolo=true groupToNestedTableTransformation=true kubernetesClientDashboardsFolders=true newDashboardSharingComponent=true lokiLabelNamesQueryApi=true grafanaconThemes=true lokiStructuredMetadata=true nestedFolders=true recordedQueriesMulti=true recoveryThreshold=true dashboardScene=true tlsMemcached=true logsInfiniteScrolling=true unifiedRequestLog=true logsExploreTableVisualisation=true panelMonitoring=true newPDFRendering=true addFieldFromCalculationStatFunctions=true reportingUseRawTimeRange=true angularDeprecationUI=true formatString=true cloudWatchNewLabelParsing=true influxdbBackendMigration=true transformationsRedesign=true alertingInsights=true useSessionStorageForRedirection=true alertingApiServer=true prometheusUsesCombobox=true cloudWatchCrossAccountQuerying=true alertingSimplifiedRouting=true prometheusAzureOverrideAudience=true ssoSettingsSAML=true ssoSettingsApi=true logsPanelControls=true alertingRuleRecoverDeleted=true pinNavItems=true newFiltersUI=true externalCorePlugins=true dashboardSceneForViewers=true onPremToCloudMigrations=true alertingRuleVersionHistoryRestore=true azureMonitorEnableUserAuth=true dashgpt=true alertingUIOptimizeReducer=true lokiQueryHints=true promQLScope=true awsAsyncQueryCaching=true preinstallAutoUpdate=true unifiedStorageSearchPermissionFiltering=true lokiQuerySplitting=true dataplaneFrontendFallback=true correlations=true logsContextDatasourceUi=true publicDashboardsScene=true pluginsDetailsRightPanel=true alertingQueryAndExpressionsStepMode=true kubernetesPlaylists=true alertRuleRestore=true annotationPermissionUpdate=true logRowsPopoverMenu=true failWrongDSUID=true cloudWatchRoundUpEndTime=true alertingNotificationsStepMode=true grafana | logger=sqlstore t=2025-06-18T15:19:05.356127812Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2025-06-18T15:19:05.356141132Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2025-06-18T15:19:05.358070027Z level=info msg="Locking database" grafana | logger=migrator t=2025-06-18T15:19:05.358080957Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2025-06-18T15:19:05.358727722Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2025-06-18T15:19:05.359610489Z level=info msg="Migration successfully executed" id="create migration_log table" duration=883.327µs grafana | logger=migrator t=2025-06-18T15:19:05.368484428Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2025-06-18T15:19:05.369292014Z level=info msg="Migration successfully executed" id="create user table" duration=807.116µs grafana | logger=migrator t=2025-06-18T15:19:05.374129022Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2025-06-18T15:19:05.375010348Z level=info 
msg="Migration successfully executed" id="add unique index user.login" duration=879.826µs grafana | logger=migrator t=2025-06-18T15:19:05.379491633Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2025-06-18T15:19:05.380565631Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.073568ms grafana | logger=migrator t=2025-06-18T15:19:05.386313856Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2025-06-18T15:19:05.38694272Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=628.854µs grafana | logger=migrator t=2025-06-18T15:19:05.391070293Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2025-06-18T15:19:05.391799759Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=728.256µs grafana | logger=migrator t=2025-06-18T15:19:05.395968981Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2025-06-18T15:19:05.399910392Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.940721ms grafana | logger=migrator t=2025-06-18T15:19:05.404061943Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2025-06-18T15:19:05.40485345Z level=info msg="Migration successfully executed" id="create user table v2" duration=791.307µs grafana | logger=migrator t=2025-06-18T15:19:05.410260542Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2025-06-18T15:19:05.411638602Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.378ms grafana | logger=migrator t=2025-06-18T15:19:05.41523318Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2025-06-18T15:19:05.416148028Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=915.208µs grafana | logger=migrator t=2025-06-18T15:19:05.420797813Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2025-06-18T15:19:05.421349058Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=550.805µs grafana | logger=migrator t=2025-06-18T15:19:05.427639877Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2025-06-18T15:19:05.428293911Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=653.554µs grafana | logger=migrator t=2025-06-18T15:19:05.432499934Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2025-06-18T15:19:05.433839044Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.33859ms grafana | logger=migrator t=2025-06-18T15:19:05.437277481Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2025-06-18T15:19:05.437307261Z level=info msg="Migration successfully executed" id="Update user table charset" duration=27.59µs grafana | logger=migrator t=2025-06-18T15:19:05.450645925Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2025-06-18T15:19:05.45247576Z level=info msg="Migration successfully executed" id="Add last_seen_at 
column to user" duration=1.828965ms grafana | logger=migrator t=2025-06-18T15:19:05.456878893Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2025-06-18T15:19:05.457214276Z level=info msg="Migration successfully executed" id="Add missing user data" duration=334.453µs grafana | logger=migrator t=2025-06-18T15:19:05.461677801Z level=info msg="Executing migration" id="Add is_disabled column to user" grafana | logger=migrator t=2025-06-18T15:19:05.46288185Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.203628ms grafana | logger=migrator t=2025-06-18T15:19:05.467733277Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2025-06-18T15:19:05.468562924Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=826.317µs grafana | logger=migrator t=2025-06-18T15:19:05.474642221Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2025-06-18T15:19:05.47585694Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.214229ms grafana | logger=migrator t=2025-06-18T15:19:05.480062953Z level=info msg="Executing migration" id="Update is_service_account column to nullable" grafana | logger=migrator t=2025-06-18T15:19:05.488055395Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.991782ms grafana | logger=migrator t=2025-06-18T15:19:05.49245887Z level=info msg="Executing migration" id="Add uid column to user" grafana | logger=migrator t=2025-06-18T15:19:05.493401667Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=942.387µs grafana | logger=migrator t=2025-06-18T15:19:05.49765787Z level=info msg="Executing migration" id="Update uid column values for users" grafana | logger=migrator t=2025-06-18T15:19:05.497990392Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=332.062µs grafana | logger=migrator t=2025-06-18T15:19:05.504558973Z level=info msg="Executing migration" id="Add unique index user_uid" grafana | logger=migrator t=2025-06-18T15:19:05.505442951Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=883.078µs grafana | logger=migrator t=2025-06-18T15:19:05.509579312Z level=info msg="Executing migration" id="Add is_provisioned column to user" grafana | logger=migrator t=2025-06-18T15:19:05.510816442Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.23674ms grafana | logger=migrator t=2025-06-18T15:19:05.515237136Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" grafana | logger=migrator t=2025-06-18T15:19:05.515692299Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=453.713µs grafana | logger=migrator t=2025-06-18T15:19:05.522379982Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once" grafana | logger=migrator t=2025-06-18T15:19:05.523367749Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=985.707µs grafana | logger=migrator t=2025-06-18T15:19:05.527787224Z 
level=info msg="Executing migration" id="update login and email fields to lowercase" grafana | logger=migrator t=2025-06-18T15:19:05.52865467Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=863.646µs grafana | logger=migrator t=2025-06-18T15:19:05.533064515Z level=info msg="Executing migration" id="update login and email fields to lowercase2" grafana | logger=migrator t=2025-06-18T15:19:05.533517378Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=452.063µs grafana | logger=migrator t=2025-06-18T15:19:05.536829144Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2025-06-18T15:19:05.53776232Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=932.676µs grafana | logger=migrator t=2025-06-18T15:19:05.544209491Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2025-06-18T15:19:05.545119758Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=909.857µs grafana | logger=migrator t=2025-06-18T15:19:05.548323723Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator t=2025-06-18T15:19:05.549739523Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.41503ms grafana | logger=migrator t=2025-06-18T15:19:05.554299719Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2025-06-18T15:19:05.555555149Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.2549ms grafana | logger=migrator t=2025-06-18T15:19:05.56218199Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2025-06-18T15:19:05.562967406Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=785.276µs grafana | logger=migrator t=2025-06-18T15:19:05.566370822Z level=info msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2025-06-18T15:19:05.566567035Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=40.451µs grafana | logger=migrator t=2025-06-18T15:19:05.571462162Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2025-06-18T15:19:05.572555221Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.094669ms grafana | logger=migrator t=2025-06-18T15:19:05.575792076Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2025-06-18T15:19:05.576934534Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.141268ms grafana | logger=migrator t=2025-06-18T15:19:05.580358351Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2025-06-18T15:19:05.581080067Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=721.276µs grafana | logger=migrator t=2025-06-18T15:19:05.593637484Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator t=2025-06-18T15:19:05.59434158Z level=info msg="Migration 
grafana | logger=migrator t=2025-06-18T15:19:05.59434158Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=703.596µs
grafana | logger=migrator t=2025-06-18T15:19:05.597990019Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-18T15:19:05.601149973Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.158065ms
grafana | logger=migrator t=2025-06-18T15:19:05.603886234Z level=info msg="Executing migration" id="create temp_user v2"
grafana | logger=migrator t=2025-06-18T15:19:05.60472625Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=839.686µs
grafana | logger=migrator t=2025-06-18T15:19:05.611066199Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
grafana | logger=migrator t=2025-06-18T15:19:05.611822336Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=755.167µs
grafana | logger=migrator t=2025-06-18T15:19:05.615880457Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
grafana | logger=migrator t=2025-06-18T15:19:05.616633082Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=752.215µs
grafana | logger=migrator t=2025-06-18T15:19:05.621043087Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
grafana | logger=migrator t=2025-06-18T15:19:05.621733622Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=690.355µs
grafana | logger=migrator t=2025-06-18T15:19:05.628971938Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
grafana | logger=migrator t=2025-06-18T15:19:05.629713654Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=741.286µs
grafana | logger=migrator t=2025-06-18T15:19:05.634161859Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
grafana | logger=migrator t=2025-06-18T15:19:05.634556362Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=394.523µs
grafana | logger=migrator t=2025-06-18T15:19:05.638510792Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
grafana | logger=migrator t=2025-06-18T15:19:05.639005146Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=493.924µs
grafana | logger=migrator t=2025-06-18T15:19:05.646020131Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
grafana | logger=migrator t=2025-06-18T15:19:05.646391214Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=368.873µs
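
The temp_user migrations above show the migrator's standard table rewrite: rename the v1 table out of the way under a *_tmp_qwerty name, create the v2 table, recreate its indexes, copy the rows across, and drop the renamed original. A minimal sketch of those steps in Python against SQLite; the two-column schema here is a hypothetical stand-in, not Grafana's real temp_user definition:

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# v1 table with some existing data (hypothetical simplified schema)
cur.execute("CREATE TABLE temp_user (id INTEGER PRIMARY KEY, email TEXT)")
cur.execute("INSERT INTO temp_user (email) VALUES ('someone@example.org')")
# 1. park the old table under a temporary name
cur.execute("ALTER TABLE temp_user RENAME TO temp_user_tmp_qwerty")
# 2. create the v2 table with the new definition, plus its indexes
cur.execute("CREATE TABLE temp_user (id INTEGER PRIMARY KEY, email TEXT, status INTEGER NOT NULL DEFAULT 0)")
cur.execute("CREATE INDEX IDX_temp_user_email ON temp_user (email)")
# 3. copy the surviving columns from v1 into v2
cur.execute("INSERT INTO temp_user (id, email) SELECT id, email FROM temp_user_tmp_qwerty")
# 4. drop the parked v1 table
cur.execute("DROP TABLE temp_user_tmp_qwerty")
conn.commit()

Rewriting the whole table this way works even on engines that cannot alter a column definition in place, which is presumably why the same rename/copy/drop pattern repeats for dashboard, data_source, api_key, alert_rule_tag and annotation_tag further down.
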
grafana | logger=migrator t=2025-06-18T15:19:05.650311474Z level=info msg="Executing migration" id="create star table"
grafana | logger=migrator t=2025-06-18T15:19:05.650952519Z level=info msg="Migration successfully executed" id="create star table" duration=640.835µs
grafana | logger=migrator t=2025-06-18T15:19:05.654844559Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
grafana | logger=migrator t=2025-06-18T15:19:05.655553154Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=708.395µs
grafana | logger=migrator t=2025-06-18T15:19:05.658451577Z level=info msg="Executing migration" id="Add column dashboard_uid in star"
grafana | logger=migrator t=2025-06-18T15:19:05.659779318Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=1.327101ms
grafana | logger=migrator t=2025-06-18T15:19:05.776638123Z level=info msg="Executing migration" id="Add column org_id in star"
grafana | logger=migrator t=2025-06-18T15:19:05.778969412Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=2.332289ms
grafana | logger=migrator t=2025-06-18T15:19:05.82387489Z level=info msg="Executing migration" id="Add column updated in star"
grafana | logger=migrator t=2025-06-18T15:19:05.826349089Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=2.469169ms
grafana | logger=migrator t=2025-06-18T15:19:05.830519362Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns"
grafana | logger=migrator t=2025-06-18T15:19:05.831775931Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=1.256259ms
grafana | logger=migrator t=2025-06-18T15:19:05.837680307Z level=info msg="Executing migration" id="create org table v1"
grafana | logger=migrator t=2025-06-18T15:19:05.838415443Z level=info msg="Migration successfully executed" id="create org table v1" duration=732.326µs
grafana | logger=migrator t=2025-06-18T15:19:05.841719859Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
grafana | logger=migrator t=2025-06-18T15:19:05.842445524Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=722.945µs
grafana | logger=migrator t=2025-06-18T15:19:05.845582959Z level=info msg="Executing migration" id="create org_user table v1"
grafana | logger=migrator t=2025-06-18T15:19:05.846371775Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=788.086µs
grafana | logger=migrator t=2025-06-18T15:19:05.84958896Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
grafana | logger=migrator t=2025-06-18T15:19:05.850480387Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=890.577µs
grafana | logger=migrator t=2025-06-18T15:19:05.854695949Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
grafana | logger=migrator t=2025-06-18T15:19:05.855637387Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=941.108µs
grafana | logger=migrator t=2025-06-18T15:19:05.860023801Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
grafana | logger=migrator t=2025-06-18T15:19:05.860887338Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=863.177µs
grafana | logger=migrator t=2025-06-18T15:19:05.868209274Z level=info msg="Executing migration" id="Update org table charset"
grafana | logger=migrator t=2025-06-18T15:19:05.868239854Z level=info msg="Migration successfully executed" id="Update org table charset" duration=31.59µs
grafana | logger=migrator t=2025-06-18T15:19:05.881946961Z level=info msg="Executing migration" id="Update org_user table charset"
grafana | logger=migrator t=2025-06-18T15:19:05.882183672Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=242.111µs
grafana | logger=migrator t=2025-06-18T15:19:05.8857645Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
grafana | logger=migrator t=2025-06-18T15:19:05.886453236Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=687.286µs
grafana | logger=migrator t=2025-06-18T15:19:05.890693338Z level=info msg="Executing migration" id="create dashboard table"
grafana | logger=migrator t=2025-06-18T15:19:05.892710934Z level=info msg="Migration successfully executed" id="create dashboard table" duration=2.052956ms
grafana | logger=migrator t=2025-06-18T15:19:05.896244662Z level=info msg="Executing migration" id="add index dashboard.account_id"
grafana | logger=migrator t=2025-06-18T15:19:05.89720463Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=960.008µs
grafana | logger=migrator t=2025-06-18T15:19:05.901510453Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
grafana | logger=migrator t=2025-06-18T15:19:05.902401129Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=890.376µs
grafana | logger=migrator t=2025-06-18T15:19:05.90511549Z level=info msg="Executing migration" id="create dashboard_tag table"
grafana | logger=migrator t=2025-06-18T15:19:05.905890037Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=773.987µs
grafana | logger=migrator t=2025-06-18T15:19:05.908648418Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
grafana | logger=migrator t=2025-06-18T15:19:05.909538605Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=889.587µs
grafana | logger=migrator t=2025-06-18T15:19:05.916596399Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
grafana | logger=migrator t=2025-06-18T15:19:05.917424826Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=828.457µs
grafana | logger=migrator t=2025-06-18T15:19:05.919936206Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
grafana | logger=migrator t=2025-06-18T15:19:05.92702688Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=7.083934ms
grafana | logger=migrator t=2025-06-18T15:19:05.930000783Z level=info msg="Executing migration" id="create dashboard v2"
grafana | logger=migrator t=2025-06-18T15:19:05.93084167Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=840.487µs
grafana | logger=migrator t=2025-06-18T15:19:05.933212708Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
grafana | logger=migrator t=2025-06-18T15:19:05.933937004Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=722.176µs
grafana | logger=migrator t=2025-06-18T15:19:05.940494955Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
grafana | logger=migrator t=2025-06-18T15:19:05.941304751Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=809.486µs
grafana | logger=migrator t=2025-06-18T15:19:05.944085092Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=322.583µs grafana | logger=migrator t=2025-06-18T15:19:05.947260437Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2025-06-18T15:19:05.948368816Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.106549ms grafana | logger=migrator t=2025-06-18T15:19:05.953294525Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2025-06-18T15:19:05.953319025Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=25.81µs grafana | logger=migrator t=2025-06-18T15:19:05.956437079Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2025-06-18T15:19:05.958346233Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.907074ms grafana | logger=migrator t=2025-06-18T15:19:05.961125465Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2025-06-18T15:19:05.962954179Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.828284ms grafana | logger=migrator t=2025-06-18T15:19:05.970491628Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2025-06-18T15:19:05.972445502Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.953284ms grafana | logger=migrator t=2025-06-18T15:19:05.976926288Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2025-06-18T15:19:05.978204587Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.278059ms grafana | logger=migrator t=2025-06-18T15:19:05.980811307Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2025-06-18T15:19:05.983625379Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.812412ms grafana | logger=migrator t=2025-06-18T15:19:05.988266575Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2025-06-18T15:19:05.989581996Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.315781ms grafana | logger=migrator t=2025-06-18T15:19:05.993276904Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2025-06-18T15:19:05.994850196Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.575702ms grafana | logger=migrator t=2025-06-18T15:19:05.999231461Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2025-06-18T15:19:05.999259971Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=29.75µs grafana | logger=migrator t=2025-06-18T15:19:06.011055062Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2025-06-18T15:19:06.011090832Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=36.601µs grafana | logger=migrator t=2025-06-18T15:19:06.028700867Z level=info msg="Executing migration" id="Add column folder_id in 
dashboard" grafana | logger=migrator t=2025-06-18T15:19:06.030752713Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.051276ms grafana | logger=migrator t=2025-06-18T15:19:06.034154678Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2025-06-18T15:19:06.036143054Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.988016ms grafana | logger=migrator t=2025-06-18T15:19:06.040936511Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2025-06-18T15:19:06.043090928Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.153777ms grafana | logger=migrator t=2025-06-18T15:19:06.047541002Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2025-06-18T15:19:06.049520907Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.976915ms grafana | logger=migrator t=2025-06-18T15:19:06.052750252Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2025-06-18T15:19:06.052989233Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=240.551µs grafana | logger=migrator t=2025-06-18T15:19:06.056305088Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2025-06-18T15:19:06.057335507Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.030169ms grafana | logger=migrator t=2025-06-18T15:19:06.071164113Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2025-06-18T15:19:06.072179751Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.018968ms grafana | logger=migrator t=2025-06-18T15:19:06.084542035Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2025-06-18T15:19:06.084588656Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=49.921µs grafana | logger=migrator t=2025-06-18T15:19:06.091901212Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2025-06-18T15:19:06.093105751Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.206769ms grafana | logger=migrator t=2025-06-18T15:19:06.100318686Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2025-06-18T15:19:06.103057148Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=2.742592ms grafana | logger=migrator t=2025-06-18T15:19:06.107439292Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-18T15:19:06.11384732Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.412358ms grafana | logger=migrator t=2025-06-18T15:19:06.119075501Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2025-06-18T15:19:06.119621715Z level=info msg="Migration successfully executed" id="create 
grafana | logger=migrator t=2025-06-18T15:19:06.071164113Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
grafana | logger=migrator t=2025-06-18T15:19:06.072179751Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.018968ms
grafana | logger=migrator t=2025-06-18T15:19:06.084542035Z level=info msg="Executing migration" id="Update dashboard title length"
grafana | logger=migrator t=2025-06-18T15:19:06.084588656Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=49.921µs
grafana | logger=migrator t=2025-06-18T15:19:06.091901212Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
grafana | logger=migrator t=2025-06-18T15:19:06.093105751Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.206769ms
grafana | logger=migrator t=2025-06-18T15:19:06.100318696Z level=info msg="Executing migration" id="create dashboard_provisioning"
grafana | logger=migrator t=2025-06-18T15:19:06.103057148Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=2.742592ms
grafana | logger=migrator t=2025-06-18T15:19:06.107439292Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-18T15:19:06.11384732Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.412358ms
grafana | logger=migrator t=2025-06-18T15:19:06.119075501Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
grafana | logger=migrator t=2025-06-18T15:19:06.119621715Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=546.004µs
grafana | logger=migrator t=2025-06-18T15:19:06.122145334Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
grafana | logger=migrator t=2025-06-18T15:19:06.122720929Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=573.785µs
grafana | logger=migrator t=2025-06-18T15:19:06.125339129Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
grafana | logger=migrator t=2025-06-18T15:19:06.126106044Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=766.915µs
grafana | logger=migrator t=2025-06-18T15:19:06.130613939Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
grafana | logger=migrator t=2025-06-18T15:19:06.130929891Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=316.912µs
grafana | logger=migrator t=2025-06-18T15:19:06.134465758Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
grafana | logger=migrator t=2025-06-18T15:19:06.134988192Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=522.264µs
grafana | logger=migrator t=2025-06-18T15:19:06.140228443Z level=info msg="Executing migration" id="Add check_sum column"
grafana | logger=migrator t=2025-06-18T15:19:06.143917152Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.688709ms
grafana | logger=migrator t=2025-06-18T15:19:06.153096972Z level=info msg="Executing migration" id="Add index for dashboard_title"
grafana | logger=migrator t=2025-06-18T15:19:06.154312461Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.218849ms
grafana | logger=migrator t=2025-06-18T15:19:06.157494725Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
grafana | logger=migrator t=2025-06-18T15:19:06.157690317Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=195.792µs
grafana | logger=migrator t=2025-06-18T15:19:06.164296828Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
grafana | logger=migrator t=2025-06-18T15:19:06.164490679Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=193.281µs
grafana | logger=migrator t=2025-06-18T15:19:06.178065974Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
grafana | logger=migrator t=2025-06-18T15:19:06.179289063Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.225598ms
grafana | logger=migrator t=2025-06-18T15:19:06.183660556Z level=info msg="Executing migration" id="Add isPublic for dashboard"
grafana | logger=migrator t=2025-06-18T15:19:06.185991754Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.330508ms
grafana | logger=migrator t=2025-06-18T15:19:06.188655395Z level=info msg="Executing migration" id="Add deleted for dashboard"
grafana | logger=migrator t=2025-06-18T15:19:06.190926382Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.270117ms
grafana | logger=migrator t=2025-06-18T15:19:06.193965675Z level=info msg="Executing migration" id="Add index for deleted"
grafana | logger=migrator t=2025-06-18T15:19:06.194722131Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=755.826µs
grafana | logger=migrator t=2025-06-18T15:19:06.197541883Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag"
grafana | logger=migrator t=2025-06-18T15:19:06.19976904Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=2.226767ms
grafana | logger=migrator t=2025-06-18T15:19:06.20625666Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag"
grafana | logger=migrator t=2025-06-18T15:19:06.208988931Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.735141ms
grafana | logger=migrator t=2025-06-18T15:19:06.212930311Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag"
grafana | logger=migrator t=2025-06-18T15:19:06.213353344Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=422.683µs
grafana | logger=migrator t=2025-06-18T15:19:06.218770005Z level=info msg="Executing migration" id="Add apiVersion for dashboard"
grafana | logger=migrator t=2025-06-18T15:19:06.221671678Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=2.901593ms
grafana | logger=migrator t=2025-06-18T15:19:06.228192328Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table"
grafana | logger=migrator t=2025-06-18T15:19:06.229027255Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=834.147µs
grafana | logger=migrator t=2025-06-18T15:19:06.232984374Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star"
grafana | logger=migrator t=2025-06-18T15:19:06.233443059Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=457.595µs
grafana | logger=migrator t=2025-06-18T15:19:06.254176677Z level=info msg="Executing migration" id="create data_source table"
grafana | logger=migrator t=2025-06-18T15:19:06.255856341Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.681954ms
grafana | logger=migrator t=2025-06-18T15:19:06.259188916Z level=info msg="Executing migration" id="add index data_source.account_id"
grafana | logger=migrator t=2025-06-18T15:19:06.260430795Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.238159ms
grafana | logger=migrator t=2025-06-18T15:19:06.265536824Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
grafana | logger=migrator t=2025-06-18T15:19:06.26629333Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=756.126µs
grafana | logger=migrator t=2025-06-18T15:19:06.269182622Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
grafana | logger=migrator t=2025-06-18T15:19:06.269943799Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=761.137µs
grafana | logger=migrator t=2025-06-18T15:19:06.27275221Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
grafana | logger=migrator t=2025-06-18T15:19:06.273501486Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=749.046µs
grafana | logger=migrator t=2025-06-18T15:19:06.277829138Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
grafana | logger=migrator t=2025-06-18T15:19:06.28440624Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.576202ms
grafana | logger=migrator t=2025-06-18T15:19:06.288729752Z level=info msg="Executing migration" id="create data_source table v2"
grafana | logger=migrator t=2025-06-18T15:19:06.289386308Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=650.416µs
grafana | logger=migrator t=2025-06-18T15:19:06.299739067Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
grafana | logger=migrator t=2025-06-18T15:19:06.300562944Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=824.217µs
grafana | logger=migrator t=2025-06-18T15:19:06.303333145Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
grafana | logger=migrator t=2025-06-18T15:19:06.304153851Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=820.036µs
grafana | logger=migrator t=2025-06-18T15:19:06.317704075Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
grafana | logger=migrator t=2025-06-18T15:19:06.318508851Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=804.506µs
grafana | logger=migrator t=2025-06-18T15:19:06.324824839Z level=info msg="Executing migration" id="Add column with_credentials"
grafana | logger=migrator t=2025-06-18T15:19:06.327056646Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.229217ms
grafana | logger=migrator t=2025-06-18T15:19:06.329906178Z level=info msg="Executing migration" id="Add secure json data column"
grafana | logger=migrator t=2025-06-18T15:19:06.332284557Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.378159ms
grafana | logger=migrator t=2025-06-18T15:19:06.335955395Z level=info msg="Executing migration" id="Update data_source table charset"
grafana | logger=migrator t=2025-06-18T15:19:06.335981055Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=26.34µs
grafana | logger=migrator t=2025-06-18T15:19:06.340375079Z level=info msg="Executing migration" id="Update initial version to 1"
grafana | logger=migrator t=2025-06-18T15:19:06.3405665Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=191.831µs
grafana | logger=migrator t=2025-06-18T15:19:06.343555303Z level=info msg="Executing migration" id="Add read_only data column"
grafana | logger=migrator t=2025-06-18T15:19:06.346005992Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.450049ms
grafana | logger=migrator t=2025-06-18T15:19:06.349732311Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
grafana | logger=migrator t=2025-06-18T15:19:06.349918532Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=186.501µs
grafana | logger=migrator t=2025-06-18T15:19:06.352782284Z level=info msg="Executing migration" id="Update json_data with nulls"
level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=217.571µs grafana | logger=migrator t=2025-06-18T15:19:06.360401393Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2025-06-18T15:19:06.362866802Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.464839ms grafana | logger=migrator t=2025-06-18T15:19:06.365388541Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2025-06-18T15:19:06.365565542Z level=info msg="Migration successfully executed" id="Update uid value" duration=176.931µs grafana | logger=migrator t=2025-06-18T15:19:06.368750067Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2025-06-18T15:19:06.369588413Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=837.926µs grafana | logger=migrator t=2025-06-18T15:19:06.374922024Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" grafana | logger=migrator t=2025-06-18T15:19:06.37574083Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=818.536µs grafana | logger=migrator t=2025-06-18T15:19:06.379725001Z level=info msg="Executing migration" id="Add is_prunable column" grafana | logger=migrator t=2025-06-18T15:19:06.383773562Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=4.04777ms grafana | logger=migrator t=2025-06-18T15:19:06.387889884Z level=info msg="Executing migration" id="Add api_version column" grafana | logger=migrator t=2025-06-18T15:19:06.390347873Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.457579ms grafana | logger=migrator t=2025-06-18T15:19:06.394424263Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" grafana | logger=migrator t=2025-06-18T15:19:06.394443554Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=19.921µs grafana | logger=migrator t=2025-06-18T15:19:06.412974566Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2025-06-18T15:19:06.414533408Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.561972ms grafana | logger=migrator t=2025-06-18T15:19:06.422156527Z level=info msg="Executing migration" id="add index api_key.account_id" grafana | logger=migrator t=2025-06-18T15:19:06.423475346Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.318519ms grafana | logger=migrator t=2025-06-18T15:19:06.426718952Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2025-06-18T15:19:06.427979151Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.260009ms grafana | logger=migrator t=2025-06-18T15:19:06.433170931Z level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2025-06-18T15:19:06.434053958Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=883.226µs grafana | logger=migrator t=2025-06-18T15:19:06.436920269Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" grafana | logger=migrator t=2025-06-18T15:19:06.437759735Z level=info msg="Migration successfully executed" id="drop 
grafana | logger=migrator t=2025-06-18T15:19:06.437759735Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=848.026µs
grafana | logger=migrator t=2025-06-18T15:19:06.44081528Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
grafana | logger=migrator t=2025-06-18T15:19:06.441992338Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.176688ms
grafana | logger=migrator t=2025-06-18T15:19:06.458341944Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
grafana | logger=migrator t=2025-06-18T15:19:06.459668764Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.32738ms
grafana | logger=migrator t=2025-06-18T15:19:06.462765467Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
grafana | logger=migrator t=2025-06-18T15:19:06.471355763Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.590376ms
grafana | logger=migrator t=2025-06-18T15:19:06.475523946Z level=info msg="Executing migration" id="create api_key table v2"
grafana | logger=migrator t=2025-06-18T15:19:06.476274491Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=750.175µs
grafana | logger=migrator t=2025-06-18T15:19:06.479035833Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
grafana | logger=migrator t=2025-06-18T15:19:06.479821699Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=785.666µs
grafana | logger=migrator t=2025-06-18T15:19:06.482603191Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
grafana | logger=migrator t=2025-06-18T15:19:06.483389506Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=785.866µs
grafana | logger=migrator t=2025-06-18T15:19:06.488090722Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
grafana | logger=migrator t=2025-06-18T15:19:06.488944879Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=853.557µs
grafana | logger=migrator t=2025-06-18T15:19:06.49166212Z level=info msg="Executing migration" id="copy api_key v1 to v2"
grafana | logger=migrator t=2025-06-18T15:19:06.491991642Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=329.142µs
grafana | logger=migrator t=2025-06-18T15:19:06.494763554Z level=info msg="Executing migration" id="Drop old table api_key_v1"
grafana | logger=migrator t=2025-06-18T15:19:06.49559342Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=829.876µs
grafana | logger=migrator t=2025-06-18T15:19:06.500115235Z level=info msg="Executing migration" id="Update api_key table charset"
grafana | logger=migrator t=2025-06-18T15:19:06.500139745Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=25.36µs
grafana | logger=migrator t=2025-06-18T15:19:06.503030837Z level=info msg="Executing migration" id="Add expires to api_key table"
grafana | logger=migrator t=2025-06-18T15:19:06.507527141Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.495854ms
grafana | logger=migrator t=2025-06-18T15:19:06.510845147Z level=info msg="Executing migration" id="Add service account foreign key"
grafana | logger=migrator t=2025-06-18T15:19:06.513454067Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.60694ms
grafana | logger=migrator t=2025-06-18T15:19:06.518028062Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
grafana | logger=migrator t=2025-06-18T15:19:06.518187423Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=159.321µs
grafana | logger=migrator t=2025-06-18T15:19:06.520881744Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
grafana | logger=migrator t=2025-06-18T15:19:06.523436963Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.554729ms
grafana | logger=migrator t=2025-06-18T15:19:06.526444926Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
grafana | logger=migrator t=2025-06-18T15:19:06.529029896Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.58748ms
grafana | logger=migrator t=2025-06-18T15:19:06.531901878Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
grafana | logger=migrator t=2025-06-18T15:19:06.532607423Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=705.265µs
grafana | logger=migrator t=2025-06-18T15:19:06.538659581Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
grafana | logger=migrator t=2025-06-18T15:19:06.539206775Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=546.884µs
grafana | logger=migrator t=2025-06-18T15:19:06.542937993Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
grafana | logger=migrator t=2025-06-18T15:19:06.543717019Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=778.646µs
grafana | logger=migrator t=2025-06-18T15:19:06.54774895Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
grafana | logger=migrator t=2025-06-18T15:19:06.549122621Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.37265ms
grafana | logger=migrator t=2025-06-18T15:19:06.559537091Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
grafana | logger=migrator t=2025-06-18T15:19:06.560875401Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.33721ms
grafana | logger=migrator t=2025-06-18T15:19:06.565794299Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
grafana | logger=migrator t=2025-06-18T15:19:06.567462602Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.671493ms
grafana | logger=migrator t=2025-06-18T15:19:06.585679041Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
grafana | logger=migrator t=2025-06-18T15:19:06.585701131Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=23.14µs
grafana | logger=migrator t=2025-06-18T15:19:06.600502845Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
grafana | logger=migrator t=2025-06-18T15:19:06.600541996Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=39.831µs
grafana | logger=migrator t=2025-06-18T15:19:06.635743175Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
grafana | logger=migrator t=2025-06-18T15:19:06.640740644Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.996949ms
grafana | logger=migrator t=2025-06-18T15:19:06.646140335Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
grafana | logger=migrator t=2025-06-18T15:19:06.649038347Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.898252ms
grafana | logger=migrator t=2025-06-18T15:19:06.653953966Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
grafana | logger=migrator t=2025-06-18T15:19:06.653971146Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=22.48µs
grafana | logger=migrator t=2025-06-18T15:19:06.656677606Z level=info msg="Executing migration" id="create quota table v1"
grafana | logger=migrator t=2025-06-18T15:19:06.657254231Z level=info msg="Migration successfully executed" id="create quota table v1" duration=576.495µs
grafana | logger=migrator t=2025-06-18T15:19:06.660312734Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
grafana | logger=migrator t=2025-06-18T15:19:06.661833505Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.519771ms
grafana | logger=migrator t=2025-06-18T15:19:06.666903454Z level=info msg="Executing migration" id="Update quota table charset"
grafana | logger=migrator t=2025-06-18T15:19:06.666941654Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=39µs
grafana | logger=migrator t=2025-06-18T15:19:06.669228502Z level=info msg="Executing migration" id="create plugin_setting table"
grafana | logger=migrator t=2025-06-18T15:19:06.670100778Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=871.876µs
grafana | logger=migrator t=2025-06-18T15:19:06.674467222Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
grafana | logger=migrator t=2025-06-18T15:19:06.675356919Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=889.607µs
grafana | logger=migrator t=2025-06-18T15:19:06.68065121Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
grafana | logger=migrator t=2025-06-18T15:19:06.684816052Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.164212ms
grafana | logger=migrator t=2025-06-18T15:19:06.687490253Z level=info msg="Executing migration" id="Update plugin_setting table charset"
grafana | logger=migrator t=2025-06-18T15:19:06.687522393Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=32.81µs
grafana | logger=migrator t=2025-06-18T15:19:06.705013917Z level=info msg="Executing migration" id="update NULL org_id to 1"
grafana | logger=migrator t=2025-06-18T15:19:06.705778362Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=861.456µs
1" grafana | logger=migrator t=2025-06-18T15:19:06.717449972Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=8.020812ms grafana | logger=migrator t=2025-06-18T15:19:06.724492386Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2025-06-18T15:19:06.725186231Z level=info msg="Migration successfully executed" id="create session table" duration=693.485µs grafana | logger=migrator t=2025-06-18T15:19:06.728169855Z level=info msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2025-06-18T15:19:06.728480917Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=310.613µs grafana | logger=migrator t=2025-06-18T15:19:06.731862302Z level=info msg="Executing migration" id="Drop old table playlist_item table" grafana | logger=migrator t=2025-06-18T15:19:06.732281206Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=418.394µs grafana | logger=migrator t=2025-06-18T15:19:06.742816137Z level=info msg="Executing migration" id="create playlist table v2" grafana | logger=migrator t=2025-06-18T15:19:06.743579113Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=762.816µs grafana | logger=migrator t=2025-06-18T15:19:06.746755177Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2025-06-18T15:19:06.747744425Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=988.948µs grafana | logger=migrator t=2025-06-18T15:19:06.750987409Z level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2025-06-18T15:19:06.75103533Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=48.511µs grafana | logger=migrator t=2025-06-18T15:19:06.755239792Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2025-06-18T15:19:06.755268393Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=28.821µs grafana | logger=migrator t=2025-06-18T15:19:06.760511893Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2025-06-18T15:19:06.764290452Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.774999ms grafana | logger=migrator t=2025-06-18T15:19:06.768447614Z level=info msg="Executing migration" id="Add playlist column updated_at" grafana | logger=migrator t=2025-06-18T15:19:06.771712239Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.263815ms grafana | logger=migrator t=2025-06-18T15:19:06.775674919Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2025-06-18T15:19:06.775940121Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=264.242µs grafana | logger=migrator t=2025-06-18T15:19:06.782251669Z level=info msg="Executing migration" id="drop preferences table v3" grafana | logger=migrator t=2025-06-18T15:19:06.78234034Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=87.071µs grafana | logger=migrator t=2025-06-18T15:19:06.785288623Z level=info msg="Executing migration" id="create preferences table v3" grafana | logger=migrator 
grafana | logger=migrator t=2025-06-18T15:19:06.724492386Z level=info msg="Executing migration" id="create session table"
grafana | logger=migrator t=2025-06-18T15:19:06.725186231Z level=info msg="Migration successfully executed" id="create session table" duration=693.485µs
grafana | logger=migrator t=2025-06-18T15:19:06.728169855Z level=info msg="Executing migration" id="Drop old table playlist table"
grafana | logger=migrator t=2025-06-18T15:19:06.728480917Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=310.613µs
grafana | logger=migrator t=2025-06-18T15:19:06.731862302Z level=info msg="Executing migration" id="Drop old table playlist_item table"
grafana | logger=migrator t=2025-06-18T15:19:06.732281206Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=418.394µs
grafana | logger=migrator t=2025-06-18T15:19:06.742816137Z level=info msg="Executing migration" id="create playlist table v2"
grafana | logger=migrator t=2025-06-18T15:19:06.743579113Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=762.816µs
grafana | logger=migrator t=2025-06-18T15:19:06.746755177Z level=info msg="Executing migration" id="create playlist item table v2"
grafana | logger=migrator t=2025-06-18T15:19:06.747744145Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=988.948µs
grafana | logger=migrator t=2025-06-18T15:19:06.750987409Z level=info msg="Executing migration" id="Update playlist table charset"
grafana | logger=migrator t=2025-06-18T15:19:06.75103533Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=48.511µs
grafana | logger=migrator t=2025-06-18T15:19:06.755239792Z level=info msg="Executing migration" id="Update playlist_item table charset"
grafana | logger=migrator t=2025-06-18T15:19:06.755268393Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=28.821µs
grafana | logger=migrator t=2025-06-18T15:19:06.760511893Z level=info msg="Executing migration" id="Add playlist column created_at"
grafana | logger=migrator t=2025-06-18T15:19:06.764290452Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.774999ms
grafana | logger=migrator t=2025-06-18T15:19:06.768447614Z level=info msg="Executing migration" id="Add playlist column updated_at"
grafana | logger=migrator t=2025-06-18T15:19:06.771712439Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.263815ms
grafana | logger=migrator t=2025-06-18T15:19:06.775674919Z level=info msg="Executing migration" id="drop preferences table v2"
grafana | logger=migrator t=2025-06-18T15:19:06.775940121Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=264.242µs
grafana | logger=migrator t=2025-06-18T15:19:06.782251679Z level=info msg="Executing migration" id="drop preferences table v3"
grafana | logger=migrator t=2025-06-18T15:19:06.78234034Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=87.071µs
grafana | logger=migrator t=2025-06-18T15:19:06.785288623Z level=info msg="Executing migration" id="create preferences table v3"
grafana | logger=migrator t=2025-06-18T15:19:06.78614135Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=852.007µs
grafana | logger=migrator t=2025-06-18T15:19:06.8005161Z level=info msg="Executing migration" id="Update preferences table charset"
grafana | logger=migrator t=2025-06-18T15:19:06.80057959Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=65.68µs
grafana | logger=migrator t=2025-06-18T15:19:06.807467423Z level=info msg="Executing migration" id="Add column team_id in preferences"
grafana | logger=migrator t=2025-06-18T15:19:06.811662855Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.196272ms
grafana | logger=migrator t=2025-06-18T15:19:06.814212615Z level=info msg="Executing migration" id="Update team_id column values in preferences"
grafana | logger=migrator t=2025-06-18T15:19:06.814320876Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=108.331µs
grafana | logger=migrator t=2025-06-18T15:19:06.816243631Z level=info msg="Executing migration" id="Add column week_start in preferences"
grafana | logger=migrator t=2025-06-18T15:19:06.818686879Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.442688ms
grafana | logger=migrator t=2025-06-18T15:19:06.825613213Z level=info msg="Executing migration" id="Add column preferences.json_data"
grafana | logger=migrator t=2025-06-18T15:19:06.829109429Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.496037ms
grafana | logger=migrator t=2025-06-18T15:19:06.839573089Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
grafana | logger=migrator t=2025-06-18T15:19:06.83959545Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=22.961µs
grafana | logger=migrator t=2025-06-18T15:19:06.844206476Z level=info msg="Executing migration" id="Add preferences index org_id"
grafana | logger=migrator t=2025-06-18T15:19:06.845252243Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.048227ms
grafana | logger=migrator t=2025-06-18T15:19:06.848682379Z level=info msg="Executing migration" id="Add preferences index user_id"
grafana | logger=migrator t=2025-06-18T15:19:06.850155771Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.472902ms
grafana | logger=migrator t=2025-06-18T15:19:06.855318141Z level=info msg="Executing migration" id="create alert table v1"
grafana | logger=migrator t=2025-06-18T15:19:06.856305948Z level=info msg="Migration successfully executed" id="create alert table v1" duration=987.187µs
grafana | logger=migrator t=2025-06-18T15:19:06.859323041Z level=info msg="Executing migration" id="add index alert org_id & id "
grafana | logger=migrator t=2025-06-18T15:19:06.860113747Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=787.666µs
grafana | logger=migrator t=2025-06-18T15:19:06.866302494Z level=info msg="Executing migration" id="add index alert state"
grafana | logger=migrator t=2025-06-18T15:19:06.867163851Z level=info msg="Migration successfully executed" id="add index alert state" duration=861.617µs
grafana | logger=migrator t=2025-06-18T15:19:06.870074674Z level=info msg="Executing migration" id="add index alert dashboard_id"
grafana | logger=migrator t=2025-06-18T15:19:06.871414033Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.337899ms
grafana | logger=migrator t=2025-06-18T15:19:06.891988592Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
grafana | logger=migrator t=2025-06-18T15:19:06.893884296Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.897974ms
grafana | logger=migrator t=2025-06-18T15:19:06.897387293Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
grafana | logger=migrator t=2025-06-18T15:19:06.898806454Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.418671ms
grafana | logger=migrator t=2025-06-18T15:19:06.903909113Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
grafana | logger=migrator t=2025-06-18T15:19:06.904737429Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=832.876µs
grafana | logger=migrator t=2025-06-18T15:19:06.907859873Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
grafana | logger=migrator t=2025-06-18T15:19:06.918357944Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.495211ms
grafana | logger=migrator t=2025-06-18T15:19:06.921864831Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
grafana | logger=migrator t=2025-06-18T15:19:06.922571837Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=704.356µs
grafana | logger=migrator t=2025-06-18T15:19:06.927570494Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
grafana | logger=migrator t=2025-06-18T15:19:06.929444279Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.874315ms
grafana | logger=migrator t=2025-06-18T15:19:06.935696177Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
grafana | logger=migrator t=2025-06-18T15:19:06.93615442Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=458.283µs
grafana | logger=migrator t=2025-06-18T15:19:06.940206282Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
grafana | logger=migrator t=2025-06-18T15:19:06.940752426Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=545.524µs
grafana | logger=migrator t=2025-06-18T15:19:06.946966523Z level=info msg="Executing migration" id="create alert_notification table v1"
grafana | logger=migrator t=2025-06-18T15:19:06.948910848Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.944015ms
grafana | logger=migrator t=2025-06-18T15:19:06.953874517Z level=info msg="Executing migration" id="Add column is_default"
grafana | logger=migrator t=2025-06-18T15:19:06.957765646Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.891769ms
grafana | logger=migrator t=2025-06-18T15:19:06.98686436Z level=info msg="Executing migration" id="Add column frequency"
msg="Migration successfully executed" id="Add column frequency" duration=7.748339ms grafana | logger=migrator t=2025-06-18T15:19:06.999688008Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2025-06-18T15:19:07.003536567Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.847039ms grafana | logger=migrator t=2025-06-18T15:19:07.006677504Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2025-06-18T15:19:07.010515976Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.837822ms grafana | logger=migrator t=2025-06-18T15:19:07.013462871Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2025-06-18T15:19:07.0145708Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.107459ms grafana | logger=migrator t=2025-06-18T15:19:07.019753423Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2025-06-18T15:19:07.019781814Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=27.091µs grafana | logger=migrator t=2025-06-18T15:19:07.029933449Z level=info msg="Executing migration" id="Update alert_notification table charset" grafana | logger=migrator t=2025-06-18T15:19:07.02995926Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=25.691µs grafana | logger=migrator t=2025-06-18T15:19:07.033312928Z level=info msg="Executing migration" id="create notification_journal table v1" grafana | logger=migrator t=2025-06-18T15:19:07.034518349Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.205311ms grafana | logger=migrator t=2025-06-18T15:19:07.039886403Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-18T15:19:07.041093754Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.206851ms grafana | logger=migrator t=2025-06-18T15:19:07.044270861Z level=info msg="Executing migration" id="drop alert_notification_journal" grafana | logger=migrator t=2025-06-18T15:19:07.044997537Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=726.485µs grafana | logger=migrator t=2025-06-18T15:19:07.048125513Z level=info msg="Executing migration" id="create alert_notification_state table v1" grafana | logger=migrator t=2025-06-18T15:19:07.049051851Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=925.878µs grafana | logger=migrator t=2025-06-18T15:19:07.056128401Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-18T15:19:07.05712886Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=999.809µs grafana | logger=migrator t=2025-06-18T15:19:07.060000484Z level=info msg="Executing migration" id="Add for to alert table" grafana | logger=migrator t=2025-06-18T15:19:07.064149538Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.148324ms grafana | logger=migrator 
grafana | logger=migrator t=2025-06-18T15:19:07.067324925Z level=info msg="Executing migration" id="Add column uid in alert_notification"
grafana | logger=migrator t=2025-06-18T15:19:07.072175146Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.849901ms
grafana | logger=migrator t=2025-06-18T15:19:07.076980897Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
grafana | logger=migrator t=2025-06-18T15:19:07.077158459Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=177.132µs
grafana | logger=migrator t=2025-06-18T15:19:07.080290245Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
grafana | logger=migrator t=2025-06-18T15:19:07.081241813Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=951.078µs
grafana | logger=migrator t=2025-06-18T15:19:07.084401329Z level=info msg="Executing migration" id="Remove unique index org_id_name"
grafana | logger=migrator t=2025-06-18T15:19:07.085877792Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.476123ms
grafana | logger=migrator t=2025-06-18T15:19:07.090456461Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
grafana | logger=migrator t=2025-06-18T15:19:07.094479635Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=4.023765ms
grafana | logger=migrator t=2025-06-18T15:19:07.098512879Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
grafana | logger=migrator t=2025-06-18T15:19:07.098563639Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=51.5µs
grafana | logger=migrator t=2025-06-18T15:19:07.101087131Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
grafana | logger=migrator t=2025-06-18T15:19:07.101804137Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=716.586µs
grafana | logger=migrator t=2025-06-18T15:19:07.107238772Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
grafana | logger=migrator t=2025-06-18T15:19:07.10814919Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=910.278µs
grafana | logger=migrator t=2025-06-18T15:19:07.112476137Z level=info msg="Executing migration" id="Drop old annotation table v4"
grafana | logger=migrator t=2025-06-18T15:19:07.112580877Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=104.99µs
grafana | logger=migrator t=2025-06-18T15:19:07.117517569Z level=info msg="Executing migration" id="create annotation table v5"
grafana | logger=migrator t=2025-06-18T15:19:07.119181824Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.663805ms
grafana | logger=migrator t=2025-06-18T15:19:07.138077323Z level=info msg="Executing migration" id="add index annotation 0 v3"
grafana | logger=migrator t=2025-06-18T15:19:07.140821236Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=2.748843ms
grafana | logger=migrator t=2025-06-18T15:19:07.14473434Z level=info msg="Executing migration" id="add index annotation 1 v3"
t=2025-06-18T15:19:07.145655047Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=926.008µs grafana | logger=migrator t=2025-06-18T15:19:07.148809374Z level=info msg="Executing migration" id="add index annotation 2 v3" grafana | logger=migrator t=2025-06-18T15:19:07.149684952Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=874.758µs grafana | logger=migrator t=2025-06-18T15:19:07.154387051Z level=info msg="Executing migration" id="add index annotation 3 v3" grafana | logger=migrator t=2025-06-18T15:19:07.15537069Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=982.849µs grafana | logger=migrator t=2025-06-18T15:19:07.15902155Z level=info msg="Executing migration" id="add index annotation 4 v3" grafana | logger=migrator t=2025-06-18T15:19:07.160543994Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.521684ms grafana | logger=migrator t=2025-06-18T15:19:07.170221305Z level=info msg="Executing migration" id="Update annotation table charset" grafana | logger=migrator t=2025-06-18T15:19:07.170345766Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=125.131µs grafana | logger=migrator t=2025-06-18T15:19:07.176358236Z level=info msg="Executing migration" id="Add column region_id to annotation table" grafana | logger=migrator t=2025-06-18T15:19:07.180661333Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.300297ms grafana | logger=migrator t=2025-06-18T15:19:07.184184943Z level=info msg="Executing migration" id="Drop category_id index" grafana | logger=migrator t=2025-06-18T15:19:07.184917099Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=734.946µs grafana | logger=migrator t=2025-06-18T15:19:07.188147136Z level=info msg="Executing migration" id="Add column tags to annotation table" grafana | logger=migrator t=2025-06-18T15:19:07.191359744Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.211478ms grafana | logger=migrator t=2025-06-18T15:19:07.195568209Z level=info msg="Executing migration" id="Create annotation_tag table v2" grafana | logger=migrator t=2025-06-18T15:19:07.196210514Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=642.265µs grafana | logger=migrator t=2025-06-18T15:19:07.199692984Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" grafana | logger=migrator t=2025-06-18T15:19:07.200560482Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=868.308µs grafana | logger=migrator t=2025-06-18T15:19:07.203728038Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" grafana | logger=migrator t=2025-06-18T15:19:07.204492694Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=764.346µs grafana | logger=migrator t=2025-06-18T15:19:07.212686914Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" grafana | logger=migrator t=2025-06-18T15:19:07.224922547Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=12.235113ms grafana | logger=migrator 
t=2025-06-18T15:19:07.32458353Z level=info msg="Executing migration" id="Create annotation_tag table v3"
grafana | logger=migrator t=2025-06-18T15:19:07.326229844Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=1.645764ms
grafana | logger=migrator t=2025-06-18T15:19:07.404353484Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
grafana | logger=migrator t=2025-06-18T15:19:07.406603812Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=2.252018ms
grafana | logger=migrator t=2025-06-18T15:19:07.511956893Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
grafana | logger=migrator t=2025-06-18T15:19:07.512728839Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=774.346µs
grafana | logger=migrator t=2025-06-18T15:19:07.566877017Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
grafana | logger=migrator t=2025-06-18T15:19:07.568187008Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=1.311781ms
grafana | logger=migrator t=2025-06-18T15:19:07.593473622Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
grafana | logger=migrator t=2025-06-18T15:19:07.593868855Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=395.993µs
grafana | logger=migrator t=2025-06-18T15:19:07.597722068Z level=info msg="Executing migration" id="Add created time to annotation table"
grafana | logger=migrator t=2025-06-18T15:19:07.602268526Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.545858ms
grafana | logger=migrator t=2025-06-18T15:19:07.616687498Z level=info msg="Executing migration" id="Add updated time to annotation table"
grafana | logger=migrator t=2025-06-18T15:19:07.621111735Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.418897ms
grafana | logger=migrator t=2025-06-18T15:19:07.624437543Z level=info msg="Executing migration" id="Add index for created in annotation table"
grafana | logger=migrator t=2025-06-18T15:19:07.625445662Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.007559ms
grafana | logger=migrator t=2025-06-18T15:19:07.629415346Z level=info msg="Executing migration" id="Add index for updated in annotation table"
grafana | logger=migrator t=2025-06-18T15:19:07.630896187Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.480021ms
grafana | logger=migrator t=2025-06-18T15:19:07.633994754Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
grafana | logger=migrator t=2025-06-18T15:19:07.634242416Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=247.772µs
grafana | logger=migrator t=2025-06-18T15:19:07.63703356Z level=info msg="Executing migration" id="Add epoch_end column"
grafana | logger=migrator t=2025-06-18T15:19:07.641657068Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.624908ms
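
The annotation_tag sequence above is the classic copy-table migration: rename the old table aside, create the new one, add its unique index, copy the rows, then drop the renamed original. A self-contained sqlite3 sketch of the same four steps; the column names are inferred from the index name UQE_annotation_tag_annotation_id_tag_id, and the rest of the schema is assumed:

```python
# Standalone sketch of the rename/create/copy/drop table-migration pattern
# seen above for annotation_tag. Column names are inferred from the index
# name UQE_annotation_tag_annotation_id_tag_id; schema details are assumed.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE annotation_tag (id INTEGER PRIMARY KEY, annotation_id INTEGER, tag_id INTEGER);
    INSERT INTO annotation_tag (annotation_id, tag_id) VALUES (1, 10), (1, 11);

    -- 1. move the old table out of the way
    ALTER TABLE annotation_tag RENAME TO annotation_tag_v2;
    -- 2. create the new shape plus its unique index up front
    CREATE TABLE annotation_tag (id INTEGER PRIMARY KEY, annotation_id INTEGER, tag_id INTEGER);
    CREATE UNIQUE INDEX UQE_annotation_tag_annotation_id_tag_id
        ON annotation_tag (annotation_id, tag_id);
    -- 3. copy the rows across
    INSERT INTO annotation_tag (annotation_id, tag_id)
        SELECT annotation_id, tag_id FROM annotation_tag_v2;
    -- 4. drop the temporary copy
    DROP TABLE annotation_tag_v2;
""")
print(con.execute("SELECT COUNT(*) FROM annotation_tag").fetchone())  # (2,)
```

Creating the unique index before the copy means duplicate rows fail fast instead of silently landing in the new table.
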
grafana | logger=migrator t=2025-06-18T15:19:07.645481181Z level=info msg="Executing migration" id="Add index for epoch_end"
grafana | logger=migrator t=2025-06-18T15:19:07.646598541Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.1176ms
grafana | logger=migrator t=2025-06-18T15:19:07.649662226Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
grafana | logger=migrator t=2025-06-18T15:19:07.649875748Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=214.092µs
grafana | logger=migrator t=2025-06-18T15:19:07.653226906Z level=info msg="Executing migration" id="Move region to single row"
grafana | logger=migrator t=2025-06-18T15:19:07.654177945Z level=info msg="Migration successfully executed" id="Move region to single row" duration=951.559µs
grafana | logger=migrator t=2025-06-18T15:19:07.657576554Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
grafana | logger=migrator t=2025-06-18T15:19:07.658983765Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.407581ms
grafana | logger=migrator t=2025-06-18T15:19:07.663217861Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
grafana | logger=migrator t=2025-06-18T15:19:07.66429738Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.080699ms
grafana | logger=migrator t=2025-06-18T15:19:07.667433206Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2025-06-18T15:19:07.668801359Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.367773ms
grafana | logger=migrator t=2025-06-18T15:19:07.672814012Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2025-06-18T15:19:07.67374244Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=928.278µs
grafana | logger=migrator t=2025-06-18T15:19:07.676715625Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
grafana | logger=migrator t=2025-06-18T15:19:07.677641183Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=925.068µs
grafana | logger=migrator t=2025-06-18T15:19:07.685816852Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
grafana | logger=migrator t=2025-06-18T15:19:07.687232644Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.419722ms
grafana | logger=migrator t=2025-06-18T15:19:07.691169217Z level=info msg="Executing migration" id="Increase tags column to length 4096"
grafana | logger=migrator t=2025-06-18T15:19:07.691202438Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=35.661µs
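
Migration blocks like this run to hundreds of entries per container start, so when triaging a CSIT run it helps to reduce them to the slowest steps plus anything logged above level=info (for example the level=warn "Skipping migration" entry further down in this log). A sketch under the same logfmt assumptions as the earlier snippet; the file name "grafana.log" and the top-5 cut-off are illustrative:

```python
# Condense a migrator dump: top slowest migrations plus any warn/error
# lines. Same logfmt assumptions as the earlier sketch; "grafana.log"
# and the top-5 cut-off are illustrative choices.
import re

EXECUTED = re.compile(r'id="([^"]+)" duration=([\d.]+)(µs|ms|s)')
TO_MS = {"µs": 1e-3, "ms": 1.0, "s": 1e3}

def summarize(lines, top=5):
    slow, noisy = [], []
    for line in lines:
        if "level=warn" in line or "level=error" in line:
            noisy.append(line.strip())
        for m in EXECUTED.finditer(line):  # wrapped lines may hold many entries
            slow.append((float(m.group(2)) * TO_MS[m.group(3)], m.group(1)))
    return sorted(slow, reverse=True)[:top], noisy

with open("grafana.log", encoding="utf-8") as f:
    slowest, noisy = summarize(f)
for ms, mig_id in slowest:
    print(f"{ms:10.3f} ms  {mig_id}")
for line in noisy:
    print(line)
```
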
grafana | logger=migrator t=2025-06-18T15:19:07.694947769Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null"
grafana | logger=migrator t=2025-06-18T15:19:07.69501966Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=73.461µs
grafana | logger=migrator t=2025-06-18T15:19:07.698402439Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null"
grafana | logger=migrator t=2025-06-18T15:19:07.698427509Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=26.65µs
grafana | logger=migrator t=2025-06-18T15:19:07.70104418Z level=info msg="Executing migration" id="create test_data table"
grafana | logger=migrator t=2025-06-18T15:19:07.7021442Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.09978ms
grafana | logger=migrator t=2025-06-18T15:19:07.706124873Z level=info msg="Executing migration" id="create dashboard_version table v1"
grafana | logger=migrator t=2025-06-18T15:19:07.707196703Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.07042ms
grafana | logger=migrator t=2025-06-18T15:19:07.7103968Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
grafana | logger=migrator t=2025-06-18T15:19:07.71162919Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.23276ms
grafana | logger=migrator t=2025-06-18T15:19:07.714903747Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
grafana | logger=migrator t=2025-06-18T15:19:07.716040127Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.13751ms
grafana | logger=migrator t=2025-06-18T15:19:07.719829469Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
grafana | logger=migrator t=2025-06-18T15:19:07.720082961Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=249.702µs
grafana | logger=migrator t=2025-06-18T15:19:07.72460983Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
grafana | logger=migrator t=2025-06-18T15:19:07.725080034Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=470.284µs
grafana | logger=migrator t=2025-06-18T15:19:07.728431912Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
grafana | logger=migrator t=2025-06-18T15:19:07.728454192Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=22.99µs
grafana | logger=migrator t=2025-06-18T15:19:07.732305035Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version"
grafana | logger=migrator t=2025-06-18T15:19:07.737575189Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=5.267704ms
grafana | logger=migrator t=2025-06-18T15:19:07.742873664Z level=info msg="Executing migration" id="create team table"
grafana | logger=migrator t=2025-06-18T15:19:07.744210876Z level=info msg="Migration successfully executed" id="create team table" duration=1.338902ms
grafana | logger=migrator t=2025-06-18T15:19:07.747641204Z level=info msg="Executing migration" id="add index team.org_id"
grafana | logger=migrator t=2025-06-18T15:19:07.748732143Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.091029ms
grafana | logger=migrator t=2025-06-18T15:19:07.763350068Z level=info msg="Executing migration"
id="add unique index team_org_id_name" grafana | logger=migrator t=2025-06-18T15:19:07.764452426Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.103929ms grafana | logger=migrator t=2025-06-18T15:19:07.768288569Z level=info msg="Executing migration" id="Add column uid in team" grafana | logger=migrator t=2025-06-18T15:19:07.77422007Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=5.930831ms grafana | logger=migrator t=2025-06-18T15:19:07.777146444Z level=info msg="Executing migration" id="Update uid column values in team" grafana | logger=migrator t=2025-06-18T15:19:07.777312906Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=166.492µs grafana | logger=migrator t=2025-06-18T15:19:07.780377822Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" grafana | logger=migrator t=2025-06-18T15:19:07.781224958Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=846.866µs grafana | logger=migrator t=2025-06-18T15:19:07.788447479Z level=info msg="Executing migration" id="Add column external_uid in team" grafana | logger=migrator t=2025-06-18T15:19:07.794210688Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=5.762229ms grafana | logger=migrator t=2025-06-18T15:19:07.797428965Z level=info msg="Executing migration" id="Add column is_provisioned in team" grafana | logger=migrator t=2025-06-18T15:19:07.802254036Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=4.820561ms grafana | logger=migrator t=2025-06-18T15:19:07.806997026Z level=info msg="Executing migration" id="create team member table" grafana | logger=migrator t=2025-06-18T15:19:07.810588977Z level=info msg="Migration successfully executed" id="create team member table" duration=3.592161ms grafana | logger=migrator t=2025-06-18T15:19:07.81574122Z level=info msg="Executing migration" id="add index team_member.org_id" grafana | logger=migrator t=2025-06-18T15:19:07.817056711Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.315381ms grafana | logger=migrator t=2025-06-18T15:19:07.823101242Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" grafana | logger=migrator t=2025-06-18T15:19:07.82406062Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=958.878µs grafana | logger=migrator t=2025-06-18T15:19:07.827206086Z level=info msg="Executing migration" id="add index team_member.team_id" grafana | logger=migrator t=2025-06-18T15:19:07.828094284Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=887.748µs grafana | logger=migrator t=2025-06-18T15:19:07.834354627Z level=info msg="Executing migration" id="Add column email to team table" grafana | logger=migrator t=2025-06-18T15:19:07.839297349Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.942432ms grafana | logger=migrator t=2025-06-18T15:19:07.842365195Z level=info msg="Executing migration" id="Add column external to team_member table" grafana | logger=migrator t=2025-06-18T15:19:07.848174614Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=5.808869ms grafana | logger=migrator 
t=2025-06-18T15:19:07.852252768Z level=info msg="Executing migration" id="Add column permission to team_member table" grafana | logger=migrator t=2025-06-18T15:19:07.857087779Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.834351ms grafana | logger=migrator t=2025-06-18T15:19:07.860836721Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" grafana | logger=migrator t=2025-06-18T15:19:07.861700789Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=863.588µs grafana | logger=migrator t=2025-06-18T15:19:07.864679893Z level=info msg="Executing migration" id="create dashboard acl table" grafana | logger=migrator t=2025-06-18T15:19:07.86548407Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=803.887µs grafana | logger=migrator t=2025-06-18T15:19:07.870007509Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" grafana | logger=migrator t=2025-06-18T15:19:07.871218018Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.210509ms grafana | logger=migrator t=2025-06-18T15:19:07.876679755Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" grafana | logger=migrator t=2025-06-18T15:19:07.877601052Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=920.967µs grafana | logger=migrator t=2025-06-18T15:19:07.882787516Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" grafana | logger=migrator t=2025-06-18T15:19:07.884448721Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.663305ms grafana | logger=migrator t=2025-06-18T15:19:07.888897369Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" grafana | logger=migrator t=2025-06-18T15:19:07.890354071Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.456462ms grafana | logger=migrator t=2025-06-18T15:19:07.900091003Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" grafana | logger=migrator t=2025-06-18T15:19:07.911671521Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=11.578708ms grafana | logger=migrator t=2025-06-18T15:19:07.915750625Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" grafana | logger=migrator t=2025-06-18T15:19:07.921513024Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=5.756019ms grafana | logger=migrator t=2025-06-18T15:19:07.925123614Z level=info msg="Executing migration" id="add index dashboard_permission" grafana | logger=migrator t=2025-06-18T15:19:07.926349645Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.228971ms grafana | logger=migrator t=2025-06-18T15:19:07.931112756Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" grafana | logger=migrator t=2025-06-18T15:19:07.931596869Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=484.173µs grafana | logger=migrator t=2025-06-18T15:19:07.93886753Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and 
folders" grafana | logger=migrator t=2025-06-18T15:19:07.939359305Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=493.905µs grafana | logger=migrator t=2025-06-18T15:19:07.947433902Z level=info msg="Executing migration" id="create tag table" grafana | logger=migrator t=2025-06-18T15:19:07.948953696Z level=info msg="Migration successfully executed" id="create tag table" duration=1.521724ms grafana | logger=migrator t=2025-06-18T15:19:07.958152453Z level=info msg="Executing migration" id="add index tag.key_value" grafana | logger=migrator t=2025-06-18T15:19:07.959544716Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.392923ms grafana | logger=migrator t=2025-06-18T15:19:07.969495089Z level=info msg="Executing migration" id="create login attempt table" grafana | logger=migrator t=2025-06-18T15:19:07.970231816Z level=info msg="Migration successfully executed" id="create login attempt table" duration=738.517µs grafana | logger=migrator t=2025-06-18T15:19:07.973297742Z level=info msg="Executing migration" id="add index login_attempt.username" grafana | logger=migrator t=2025-06-18T15:19:07.97426816Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=970.288µs grafana | logger=migrator t=2025-06-18T15:19:07.978475194Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" grafana | logger=migrator t=2025-06-18T15:19:07.979634545Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.159461ms grafana | logger=migrator t=2025-06-18T15:19:07.985618165Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-18T15:19:07.999891056Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.269301ms grafana | logger=migrator t=2025-06-18T15:19:08.006184858Z level=info msg="Executing migration" id="create login_attempt v2" grafana | logger=migrator t=2025-06-18T15:19:08.006864805Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=680.327µs grafana | logger=migrator t=2025-06-18T15:19:08.010473564Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" grafana | logger=migrator t=2025-06-18T15:19:08.012009706Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.534492ms grafana | logger=migrator t=2025-06-18T15:19:08.016932515Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" grafana | logger=migrator t=2025-06-18T15:19:08.017398459Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=466.144µs grafana | logger=migrator t=2025-06-18T15:19:08.02114697Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" grafana | logger=migrator t=2025-06-18T15:19:08.022031247Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=887.127µs grafana | logger=migrator t=2025-06-18T15:19:08.025172322Z level=info msg="Executing migration" id="create user auth table" grafana | logger=migrator t=2025-06-18T15:19:08.025999519Z level=info msg="Migration successfully executed" id="create user auth table" duration=827.007µs grafana | logger=migrator t=2025-06-18T15:19:08.043472941Z level=info msg="Executing 
migration" id="create index IDX_user_auth_auth_module_auth_id - v1" grafana | logger=migrator t=2025-06-18T15:19:08.044225756Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=753.345µs grafana | logger=migrator t=2025-06-18T15:19:08.050149324Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" grafana | logger=migrator t=2025-06-18T15:19:08.050197345Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=51.841µs grafana | logger=migrator t=2025-06-18T15:19:08.054277168Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" grafana | logger=migrator t=2025-06-18T15:19:08.061367915Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=7.091487ms grafana | logger=migrator t=2025-06-18T15:19:08.067475264Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" grafana | logger=migrator t=2025-06-18T15:19:08.07315149Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.677996ms grafana | logger=migrator t=2025-06-18T15:19:08.076661129Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" grafana | logger=migrator t=2025-06-18T15:19:08.083243302Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=6.580903ms grafana | logger=migrator t=2025-06-18T15:19:08.088128972Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" grafana | logger=migrator t=2025-06-18T15:19:08.094367272Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=6.23754ms grafana | logger=migrator t=2025-06-18T15:19:08.104303482Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" grafana | logger=migrator t=2025-06-18T15:19:08.105283181Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=983.149µs grafana | logger=migrator t=2025-06-18T15:19:08.111358289Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" grafana | logger=migrator t=2025-06-18T15:19:08.117107426Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.748707ms grafana | logger=migrator t=2025-06-18T15:19:08.122686701Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" grafana | logger=migrator t=2025-06-18T15:19:08.128586579Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=5.893628ms grafana | logger=migrator t=2025-06-18T15:19:08.132389119Z level=info msg="Executing migration" id="create server_lock table" grafana | logger=migrator t=2025-06-18T15:19:08.133326887Z level=info msg="Migration successfully executed" id="create server_lock table" duration=938.128µs grafana | logger=migrator t=2025-06-18T15:19:08.138402898Z level=info msg="Executing migration" id="add index server_lock.operation_uid" grafana | logger=migrator t=2025-06-18T15:19:08.139436276Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.033318ms grafana | logger=migrator t=2025-06-18T15:19:08.145101832Z level=info msg="Executing migration" id="create user auth token table" grafana | logger=migrator t=2025-06-18T15:19:08.146263401Z level=info msg="Migration successfully executed" id="create user auth token table" 
duration=1.162079ms grafana | logger=migrator t=2025-06-18T15:19:08.153265308Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" grafana | logger=migrator t=2025-06-18T15:19:08.154439508Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.174799ms grafana | logger=migrator t=2025-06-18T15:19:08.159428218Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" grafana | logger=migrator t=2025-06-18T15:19:08.160440476Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.012218ms grafana | logger=migrator t=2025-06-18T15:19:08.167730325Z level=info msg="Executing migration" id="add index user_auth_token.user_id" grafana | logger=migrator t=2025-06-18T15:19:08.169686441Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.958996ms grafana | logger=migrator t=2025-06-18T15:19:08.1733133Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" grafana | logger=migrator t=2025-06-18T15:19:08.182429804Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=9.115124ms grafana | logger=migrator t=2025-06-18T15:19:08.195709111Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2025-06-18T15:19:08.197407635Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.699004ms grafana | logger=migrator t=2025-06-18T15:19:08.203461184Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" grafana | logger=migrator t=2025-06-18T15:19:08.209678374Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=6.22155ms grafana | logger=migrator t=2025-06-18T15:19:08.21291334Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2025-06-18T15:19:08.213909339Z level=info msg="Migration successfully executed" id="create cache_data table" duration=995.719µs grafana | logger=migrator t=2025-06-18T15:19:08.217012944Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2025-06-18T15:19:08.218064592Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.051608ms grafana | logger=migrator t=2025-06-18T15:19:08.222819531Z level=info msg="Executing migration" id="create short_url table v1" grafana | logger=migrator t=2025-06-18T15:19:08.224178221Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.35808ms grafana | logger=migrator t=2025-06-18T15:19:08.22773047Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2025-06-18T15:19:08.229297523Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.566833ms grafana | logger=migrator t=2025-06-18T15:19:08.23271741Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator t=2025-06-18T15:19:08.23273575Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=19.47µs grafana | logger=migrator t=2025-06-18T15:19:08.243534628Z level=info msg="Executing migration" id="delete 
alert_definition table" grafana | logger=migrator t=2025-06-18T15:19:08.24378601Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=250.632µs grafana | logger=migrator t=2025-06-18T15:19:08.247155867Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2025-06-18T15:19:08.248628639Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.472022ms grafana | logger=migrator t=2025-06-18T15:19:08.251838355Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-18T15:19:08.252935124Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.096349ms grafana | logger=migrator t=2025-06-18T15:19:08.25600405Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-18T15:19:08.257132738Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.128208ms grafana | logger=migrator t=2025-06-18T15:19:08.261688045Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-18T15:19:08.261708355Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=21.12µs grafana | logger=migrator t=2025-06-18T15:19:08.264554688Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-18T15:19:08.265602116Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.047218ms grafana | logger=migrator t=2025-06-18T15:19:08.26847996Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-18T15:19:08.269948602Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.472442ms grafana | logger=migrator t=2025-06-18T15:19:08.27588759Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-18T15:19:08.27722052Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.34076ms grafana | logger=migrator t=2025-06-18T15:19:08.280516097Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-18T15:19:08.281765738Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.266331ms grafana | logger=migrator t=2025-06-18T15:19:08.286033001Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2025-06-18T15:19:08.292615745Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.582334ms grafana | logger=migrator t=2025-06-18T15:19:08.297029401Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2025-06-18T15:19:08.297790816Z level=info msg="Migration successfully executed" id="drop alert_definition table" 
duration=761.125µs grafana | logger=migrator t=2025-06-18T15:19:08.30187088Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2025-06-18T15:19:08.301965881Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=94.611µs grafana | logger=migrator t=2025-06-18T15:19:08.306198874Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2025-06-18T15:19:08.307844588Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.644664ms grafana | logger=migrator t=2025-06-18T15:19:08.310932843Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2025-06-18T15:19:08.311975492Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.042119ms grafana | logger=migrator t=2025-06-18T15:19:08.315052677Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator t=2025-06-18T15:19:08.316069665Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.016458ms grafana | logger=migrator t=2025-06-18T15:19:08.32043953Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-18T15:19:08.320460221Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=21.231µs grafana | logger=migrator t=2025-06-18T15:19:08.323476955Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2025-06-18T15:19:08.324425862Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=947.877µs grafana | logger=migrator t=2025-06-18T15:19:08.329131741Z level=info msg="Executing migration" id="create alert_instance table" grafana | logger=migrator t=2025-06-18T15:19:08.330992545Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.860844ms grafana | logger=migrator t=2025-06-18T15:19:08.339828216Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2025-06-18T15:19:08.341089737Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.268301ms grafana | logger=migrator t=2025-06-18T15:19:08.344342403Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2025-06-18T15:19:08.346162688Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.819395ms grafana | logger=migrator t=2025-06-18T15:19:08.350992967Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2025-06-18T15:19:08.357014036Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.020369ms grafana | 
logger=migrator t=2025-06-18T15:19:08.35995797Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-18T15:19:08.361035128Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.076918ms grafana | logger=migrator t=2025-06-18T15:19:08.364494666Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-18T15:19:08.365469774Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=974.828µs grafana | logger=migrator t=2025-06-18T15:19:08.376051079Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2025-06-18T15:19:08.404189237Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=28.132127ms grafana | logger=migrator t=2025-06-18T15:19:08.407174362Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2025-06-18T15:19:08.434134159Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=26.956757ms grafana | logger=migrator t=2025-06-18T15:19:08.438429684Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-18T15:19:08.439423302Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=993.068µs grafana | logger=migrator t=2025-06-18T15:19:08.443024931Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-18T15:19:08.444936057Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.913156ms grafana | logger=migrator t=2025-06-18T15:19:08.448742297Z level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | logger=migrator t=2025-06-18T15:19:08.455380891Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.639034ms grafana | logger=migrator t=2025-06-18T15:19:08.459648985Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2025-06-18T15:19:08.465529093Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.876688ms grafana | logger=migrator t=2025-06-18T15:19:08.469882279Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2025-06-18T15:19:08.470937867Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.055218ms grafana | logger=migrator t=2025-06-18T15:19:08.481851005Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2025-06-18T15:19:08.483591839Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.740214ms grafana | logger=migrator t=2025-06-18T15:19:08.488373008Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" grafana | logger=migrator 
t=2025-06-18T15:19:08.489434017Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.061059ms grafana | logger=migrator t=2025-06-18T15:19:08.492962275Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" grafana | logger=migrator t=2025-06-18T15:19:08.494254746Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.290871ms grafana | logger=migrator t=2025-06-18T15:19:08.499204295Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-18T15:19:08.499236015Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=33µs grafana | logger=migrator t=2025-06-18T15:19:08.504119625Z level=info msg="Executing migration" id="add column for to alert_rule" grafana | logger=migrator t=2025-06-18T15:19:08.510564167Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.444442ms grafana | logger=migrator t=2025-06-18T15:19:08.517497994Z level=info msg="Executing migration" id="add column annotations to alert_rule" grafana | logger=migrator t=2025-06-18T15:19:08.526836698Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=9.338334ms grafana | logger=migrator t=2025-06-18T15:19:08.530130025Z level=info msg="Executing migration" id="add column labels to alert_rule" grafana | logger=migrator t=2025-06-18T15:19:08.534625492Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.495077ms grafana | logger=migrator t=2025-06-18T15:19:08.53818355Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" grafana | logger=migrator t=2025-06-18T15:19:08.539109639Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=924.868µs grafana | logger=migrator t=2025-06-18T15:19:08.543312452Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" grafana | logger=migrator t=2025-06-18T15:19:08.544531243Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.21722ms grafana | logger=migrator t=2025-06-18T15:19:08.547837469Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" grafana | logger=migrator t=2025-06-18T15:19:08.553952348Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.114069ms grafana | logger=migrator t=2025-06-18T15:19:08.558240502Z level=info msg="Executing migration" id="add panel_id column to alert_rule" grafana | logger=migrator t=2025-06-18T15:19:08.564322892Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.08183ms grafana | logger=migrator t=2025-06-18T15:19:08.567448208Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" grafana | logger=migrator t=2025-06-18T15:19:08.568543536Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.097848ms grafana | logger=migrator t=2025-06-18T15:19:08.572090515Z level=info 
msg="Executing migration" id="add rule_group_idx column to alert_rule" grafana | logger=migrator t=2025-06-18T15:19:08.578470997Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.377232ms grafana | logger=migrator t=2025-06-18T15:19:08.582450478Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" grafana | logger=migrator t=2025-06-18T15:19:08.588581088Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.13023ms grafana | logger=migrator t=2025-06-18T15:19:08.591940095Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" grafana | logger=migrator t=2025-06-18T15:19:08.591959155Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=19.97µs grafana | logger=migrator t=2025-06-18T15:19:08.595077381Z level=info msg="Executing migration" id="create alert_rule_version table" grafana | logger=migrator t=2025-06-18T15:19:08.596134899Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.057028ms grafana | logger=migrator t=2025-06-18T15:19:08.601511143Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-18T15:19:08.603150095Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.638892ms grafana | logger=migrator t=2025-06-18T15:19:08.608464118Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" grafana | logger=migrator t=2025-06-18T15:19:08.609550438Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.08609ms grafana | logger=migrator t=2025-06-18T15:19:08.612916005Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-18T15:19:08.612934765Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=19.78µs grafana | logger=migrator t=2025-06-18T15:19:08.622897776Z level=info msg="Executing migration" id="add column for to alert_rule_version" grafana | logger=migrator t=2025-06-18T15:19:08.633574392Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=10.674146ms grafana | logger=migrator t=2025-06-18T15:19:08.639256698Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" grafana | logger=migrator t=2025-06-18T15:19:08.644017756Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.760028ms grafana | logger=migrator t=2025-06-18T15:19:08.652329293Z level=info msg="Executing migration" id="add column labels to alert_rule_version" grafana | logger=migrator t=2025-06-18T15:19:08.660416509Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=8.086366ms grafana | logger=migrator t=2025-06-18T15:19:08.663705326Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" grafana | logger=migrator t=2025-06-18T15:19:08.668280983Z level=info msg="Migration successfully 
executed" id="add rule_group_idx column to alert_rule_version" duration=4.575127ms grafana | logger=migrator t=2025-06-18T15:19:08.672174184Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" grafana | logger=migrator t=2025-06-18T15:19:08.67909812Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.922646ms grafana | logger=migrator t=2025-06-18T15:19:08.682086705Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" grafana | logger=migrator t=2025-06-18T15:19:08.682104905Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=19.03µs grafana | logger=migrator t=2025-06-18T15:19:08.684959588Z level=info msg="Executing migration" id=create_alert_configuration_table grafana | logger=migrator t=2025-06-18T15:19:08.685855425Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=895.337µs grafana | logger=migrator t=2025-06-18T15:19:08.692831291Z level=info msg="Executing migration" id="Add column default in alert_configuration" grafana | logger=migrator t=2025-06-18T15:19:08.701554562Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=8.720931ms grafana | logger=migrator t=2025-06-18T15:19:08.704805478Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" grafana | logger=migrator t=2025-06-18T15:19:08.704819228Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=14.36µs grafana | logger=migrator t=2025-06-18T15:19:08.70874552Z level=info msg="Executing migration" id="add column org_id in alert_configuration" grafana | logger=migrator t=2025-06-18T15:19:08.717058787Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=8.311977ms grafana | logger=migrator t=2025-06-18T15:19:08.720338953Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" grafana | logger=migrator t=2025-06-18T15:19:08.72116916Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=829.677µs grafana | logger=migrator t=2025-06-18T15:19:08.727521692Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" grafana | logger=migrator t=2025-06-18T15:19:08.735359675Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=7.838023ms grafana | logger=migrator t=2025-06-18T15:19:08.738530391Z level=info msg="Executing migration" id=create_ngalert_configuration_table grafana | logger=migrator t=2025-06-18T15:19:08.739244306Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=713.495µs grafana | logger=migrator t=2025-06-18T15:19:08.743656172Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" grafana | logger=migrator t=2025-06-18T15:19:08.744779401Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.122639ms grafana | logger=migrator t=2025-06-18T15:19:08.753178239Z level=info msg="Executing migration" id="add column send_alerts_to in 
ngalert_configuration" grafana | logger=migrator t=2025-06-18T15:19:08.765730261Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=12.548932ms grafana | logger=migrator t=2025-06-18T15:19:08.772627816Z level=info msg="Executing migration" id="create provenance_type table" grafana | logger=migrator t=2025-06-18T15:19:08.773992127Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.364521ms grafana | logger=migrator t=2025-06-18T15:19:08.777491766Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" grafana | logger=migrator t=2025-06-18T15:19:08.778594834Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.102418ms grafana | logger=migrator t=2025-06-18T15:19:08.79042373Z level=info msg="Executing migration" id="create alert_image table" grafana | logger=migrator t=2025-06-18T15:19:08.791448449Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.028169ms grafana | logger=migrator t=2025-06-18T15:19:08.795527881Z level=info msg="Executing migration" id="add unique index on token to alert_image table" grafana | logger=migrator t=2025-06-18T15:19:08.796695481Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.16675ms grafana | logger=migrator t=2025-06-18T15:19:08.799419333Z level=info msg="Executing migration" id="support longer URLs in alert_image table" grafana | logger=migrator t=2025-06-18T15:19:08.799435763Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=17.34µs grafana | logger=migrator t=2025-06-18T15:19:08.805346841Z level=info msg="Executing migration" id=create_alert_configuration_history_table grafana | logger=migrator t=2025-06-18T15:19:08.806364629Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.018218ms grafana | logger=migrator t=2025-06-18T15:19:08.809194642Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" grafana | logger=migrator t=2025-06-18T15:19:08.81020669Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.011968ms grafana | logger=migrator t=2025-06-18T15:19:08.813025303Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-18T15:19:08.813419017Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-18T15:19:08.817816701Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" grafana | logger=migrator t=2025-06-18T15:19:08.818210124Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=393.353µs grafana | logger=migrator t=2025-06-18T15:19:08.821803304Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" grafana | logger=migrator t=2025-06-18T15:19:08.822860792Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.056948ms grafana | logger=migrator t=2025-06-18T15:19:08.825653725Z 
level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" grafana | logger=migrator t=2025-06-18T15:19:08.83254453Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.888475ms grafana | logger=migrator t=2025-06-18T15:19:08.837158088Z level=info msg="Executing migration" id="create library_element table v1" grafana | logger=migrator t=2025-06-18T15:19:08.838240666Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.081878ms grafana | logger=migrator t=2025-06-18T15:19:08.84121018Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" grafana | logger=migrator t=2025-06-18T15:19:08.84237278Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.16244ms grafana | logger=migrator t=2025-06-18T15:19:08.846751425Z level=info msg="Executing migration" id="create library_element_connection table v1" grafana | logger=migrator t=2025-06-18T15:19:08.847762134Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.012519ms grafana | logger=migrator t=2025-06-18T15:19:08.852358931Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" grafana | logger=migrator t=2025-06-18T15:19:08.853613301Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.25436ms grafana | logger=migrator t=2025-06-18T15:19:08.857598613Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" grafana | logger=migrator t=2025-06-18T15:19:08.859164026Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.559873ms grafana | logger=migrator t=2025-06-18T15:19:08.862581394Z level=info msg="Executing migration" id="increase max description length to 2048" grafana | logger=migrator t=2025-06-18T15:19:08.862622754Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=42.11µs grafana | logger=migrator t=2025-06-18T15:19:08.870965611Z level=info msg="Executing migration" id="alter library_element model to mediumtext" grafana | logger=migrator t=2025-06-18T15:19:08.870995221Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=32.23µs grafana | logger=migrator t=2025-06-18T15:19:08.873655693Z level=info msg="Executing migration" id="add library_element folder uid" grafana | logger=migrator t=2025-06-18T15:19:08.884629092Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=10.965459ms grafana | logger=migrator t=2025-06-18T15:19:08.88812863Z level=info msg="Executing migration" id="populate library_element folder_uid" grafana | logger=migrator t=2025-06-18T15:19:08.888608144Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=481.274µs grafana | logger=migrator t=2025-06-18T15:19:08.892899139Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" grafana | logger=migrator t=2025-06-18T15:19:08.894627613Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.730674ms grafana | logger=migrator 
grafana | logger=migrator t=2025-06-18T15:19:08.90412373Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=460.704µs
grafana | logger=migrator t=2025-06-18T15:19:08.907312765Z level=info msg="Executing migration" id="create data_keys table"
grafana | logger=migrator t=2025-06-18T15:19:08.908746768Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.428672ms
grafana | logger=migrator t=2025-06-18T15:19:08.911912173Z level=info msg="Executing migration" id="create secrets table"
grafana | logger=migrator t=2025-06-18T15:19:08.913224753Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.30921ms
grafana | logger=migrator t=2025-06-18T15:19:08.923153004Z level=info msg="Executing migration" id="rename data_keys name column to id"
grafana | logger=migrator t=2025-06-18T15:19:08.97094609Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=47.783576ms
grafana | logger=migrator t=2025-06-18T15:19:08.978347189Z level=info msg="Executing migration" id="add name column into data_keys"
grafana | logger=migrator t=2025-06-18T15:19:08.98581902Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=7.472391ms
grafana | logger=migrator t=2025-06-18T15:19:08.99079609Z level=info msg="Executing migration" id="copy data_keys id column values into name"
grafana | logger=migrator t=2025-06-18T15:19:08.990946592Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=151.202µs
grafana | logger=migrator t=2025-06-18T15:19:08.994716212Z level=info msg="Executing migration" id="rename data_keys name column to label"
grafana | logger=migrator t=2025-06-18T15:19:09.028904217Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=34.181665ms
grafana | logger=migrator t=2025-06-18T15:19:09.032056313Z level=info msg="Executing migration" id="rename data_keys id column back to name"
grafana | logger=migrator t=2025-06-18T15:19:09.062364815Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=30.303882ms
grafana | logger=migrator t=2025-06-18T15:19:09.068228152Z level=info msg="Executing migration" id="create kv_store table v1"
grafana | logger=migrator t=2025-06-18T15:19:09.069306561Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.080039ms
grafana | logger=migrator t=2025-06-18T15:19:09.073463624Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
grafana | logger=migrator t=2025-06-18T15:19:09.074584804Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.1207ms
grafana | logger=migrator t=2025-06-18T15:19:09.078684216Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
grafana | logger=migrator t=2025-06-18T15:19:09.078894838Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=210.212µs
grafana | logger=migrator t=2025-06-18T15:19:09.083894828Z level=info msg="Executing migration" id="create permission table"
grafana | logger=migrator t=2025-06-18T15:19:09.085231759Z level=info msg="Migration successfully executed" id="create permission table" duration=1.336561ms
msg="Migration successfully executed" id="create permission table" duration=1.336561ms grafana | logger=migrator t=2025-06-18T15:19:09.088641876Z level=info msg="Executing migration" id="add unique index permission.role_id" grafana | logger=migrator t=2025-06-18T15:19:09.091079236Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=2.43777ms grafana | logger=migrator t=2025-06-18T15:19:09.094774336Z level=info msg="Executing migration" id="add unique index role_id_action_scope" grafana | logger=migrator t=2025-06-18T15:19:09.095900254Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.125558ms grafana | logger=migrator t=2025-06-18T15:19:09.101396788Z level=info msg="Executing migration" id="create role table" grafana | logger=migrator t=2025-06-18T15:19:09.102456247Z level=info msg="Migration successfully executed" id="create role table" duration=1.062019ms grafana | logger=migrator t=2025-06-18T15:19:09.105854354Z level=info msg="Executing migration" id="add column display_name" grafana | logger=migrator t=2025-06-18T15:19:09.114382303Z level=info msg="Migration successfully executed" id="add column display_name" duration=8.523729ms grafana | logger=migrator t=2025-06-18T15:19:09.117736349Z level=info msg="Executing migration" id="add column group_name" grafana | logger=migrator t=2025-06-18T15:19:09.123288364Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.525505ms grafana | logger=migrator t=2025-06-18T15:19:09.12767531Z level=info msg="Executing migration" id="add index role.org_id" grafana | logger=migrator t=2025-06-18T15:19:09.128866129Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.191289ms grafana | logger=migrator t=2025-06-18T15:19:09.132189885Z level=info msg="Executing migration" id="add unique index role_org_id_name" grafana | logger=migrator t=2025-06-18T15:19:09.133865879Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.674534ms grafana | logger=migrator t=2025-06-18T15:19:09.13896154Z level=info msg="Executing migration" id="add index role_org_id_uid" grafana | logger=migrator t=2025-06-18T15:19:09.140910305Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.948975ms grafana | logger=migrator t=2025-06-18T15:19:09.145707664Z level=info msg="Executing migration" id="create team role table" grafana | logger=migrator t=2025-06-18T15:19:09.146751173Z level=info msg="Migration successfully executed" id="create team role table" duration=1.043709ms grafana | logger=migrator t=2025-06-18T15:19:09.153008913Z level=info msg="Executing migration" id="add index team_role.org_id" grafana | logger=migrator t=2025-06-18T15:19:09.154365063Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.35735ms grafana | logger=migrator t=2025-06-18T15:19:09.157985833Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" grafana | logger=migrator t=2025-06-18T15:19:09.16030199Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=2.315737ms grafana | logger=migrator t=2025-06-18T15:19:09.164694606Z level=info msg="Executing migration" id="add index team_role.team_id" grafana | logger=migrator t=2025-06-18T15:19:09.166051307Z level=info msg="Migration successfully executed" id="add index 
grafana | logger=migrator t=2025-06-18T15:19:09.169926388Z level=info msg="Executing migration" id="create user role table"
grafana | logger=migrator t=2025-06-18T15:19:09.171114747Z level=info msg="Migration successfully executed" id="create user role table" duration=1.186239ms
grafana | logger=migrator t=2025-06-18T15:19:09.184215123Z level=info msg="Executing migration" id="add index user_role.org_id"
grafana | logger=migrator t=2025-06-18T15:19:09.185673054Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.459841ms
grafana | logger=migrator t=2025-06-18T15:19:09.202194037Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
grafana | logger=migrator t=2025-06-18T15:19:09.204361784Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=2.170018ms
grafana | logger=migrator t=2025-06-18T15:19:09.208387037Z level=info msg="Executing migration" id="add index user_role.user_id"
grafana | logger=migrator t=2025-06-18T15:19:09.212140627Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=3.75425ms
grafana | logger=migrator t=2025-06-18T15:19:09.215750365Z level=info msg="Executing migration" id="create builtin role table"
grafana | logger=migrator t=2025-06-18T15:19:09.216742724Z level=info msg="Migration successfully executed" id="create builtin role table" duration=992.319µs
grafana | logger=migrator t=2025-06-18T15:19:09.21990503Z level=info msg="Executing migration" id="add index builtin_role.role_id"
grafana | logger=migrator t=2025-06-18T15:19:09.221048018Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.142729ms
grafana | logger=migrator t=2025-06-18T15:19:09.225334343Z level=info msg="Executing migration" id="add index builtin_role.name"
grafana | logger=migrator t=2025-06-18T15:19:09.226441351Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.106838ms
grafana | logger=migrator t=2025-06-18T15:19:09.230592314Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
grafana | logger=migrator t=2025-06-18T15:19:09.239528787Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.935473ms
grafana | logger=migrator t=2025-06-18T15:19:09.242844243Z level=info msg="Executing migration" id="add index builtin_role.org_id"
grafana | logger=migrator t=2025-06-18T15:19:09.24372473Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=880.317µs
grafana | logger=migrator t=2025-06-18T15:19:09.248384358Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
grafana | logger=migrator t=2025-06-18T15:19:09.250190412Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.804154ms
grafana | logger=migrator t=2025-06-18T15:19:09.253904122Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
grafana | logger=migrator t=2025-06-18T15:19:09.255710376Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.805884ms
grafana | logger=migrator t=2025-06-18T15:19:09.259122684Z level=info msg="Executing migration" id="add unique index role.uid"
grafana | logger=migrator t=2025-06-18T15:19:09.260551955Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.429501ms
msg="Migration successfully executed" id="add unique index role.uid" duration=1.429501ms grafana | logger=migrator t=2025-06-18T15:19:09.2648363Z level=info msg="Executing migration" id="create seed assignment table" grafana | logger=migrator t=2025-06-18T15:19:09.265660726Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=821.006µs grafana | logger=migrator t=2025-06-18T15:19:09.26861202Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" grafana | logger=migrator t=2025-06-18T15:19:09.269835069Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.222829ms grafana | logger=migrator t=2025-06-18T15:19:09.272891914Z level=info msg="Executing migration" id="add column hidden to role table" grafana | logger=migrator t=2025-06-18T15:19:09.280935008Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.041994ms grafana | logger=migrator t=2025-06-18T15:19:09.284935621Z level=info msg="Executing migration" id="permission kind migration" grafana | logger=migrator t=2025-06-18T15:19:09.293316218Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.379937ms grafana | logger=migrator t=2025-06-18T15:19:09.296730865Z level=info msg="Executing migration" id="permission attribute migration" grafana | logger=migrator t=2025-06-18T15:19:09.303170057Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=6.441242ms grafana | logger=migrator t=2025-06-18T15:19:09.308032416Z level=info msg="Executing migration" id="permission identifier migration" grafana | logger=migrator t=2025-06-18T15:19:09.31602008Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=7.987764ms grafana | logger=migrator t=2025-06-18T15:19:09.32481041Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2025-06-18T15:19:09.326479874Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.669134ms grafana | logger=migrator t=2025-06-18T15:19:09.352947386Z level=info msg="Executing migration" id="add permission action scope role_id index" grafana | logger=migrator t=2025-06-18T15:19:09.354277547Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.332351ms grafana | logger=migrator t=2025-06-18T15:19:09.361438864Z level=info msg="Executing migration" id="remove permission role_id action scope index" grafana | logger=migrator t=2025-06-18T15:19:09.362500983Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.056899ms grafana | logger=migrator t=2025-06-18T15:19:09.365949681Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" grafana | logger=migrator t=2025-06-18T15:19:09.37593416Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=9.976249ms grafana | logger=migrator t=2025-06-18T15:19:09.381951018Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" grafana | logger=migrator t=2025-06-18T15:19:09.384043165Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=2.090057ms grafana | logger=migrator 
grafana | logger=migrator t=2025-06-18T15:19:09.390337335Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=1.35494ms
grafana | logger=migrator t=2025-06-18T15:19:09.393497532Z level=info msg="Executing migration" id="create query_history table v1"
grafana | logger=migrator t=2025-06-18T15:19:09.3945357Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.037198ms
grafana | logger=migrator t=2025-06-18T15:19:09.399234077Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
grafana | logger=migrator t=2025-06-18T15:19:09.400217515Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=982.878µs
grafana | logger=migrator t=2025-06-18T15:19:09.403183199Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
grafana | logger=migrator t=2025-06-18T15:19:09.403208139Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=21.28µs
grafana | logger=migrator t=2025-06-18T15:19:09.405466528Z level=info msg="Executing migration" id="create query_history_details table v1"
grafana | logger=migrator t=2025-06-18T15:19:09.406354694Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=887.906µs
grafana | logger=migrator t=2025-06-18T15:19:09.410678459Z level=info msg="Executing migration" id="rbac disabled migrator"
grafana | logger=migrator t=2025-06-18T15:19:09.41073656Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=58.441µs
grafana | logger=migrator t=2025-06-18T15:19:09.413195729Z level=info msg="Executing migration" id="teams permissions migration"
grafana | logger=migrator t=2025-06-18T15:19:09.414098036Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=902.307µs
grafana | logger=migrator t=2025-06-18T15:19:09.417533044Z level=info msg="Executing migration" id="dashboard permissions"
grafana | logger=migrator t=2025-06-18T15:19:09.418566523Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.0552ms
grafana | logger=migrator t=2025-06-18T15:19:09.422573435Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
grafana | logger=migrator t=2025-06-18T15:19:09.423187759Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=613.954µs
grafana | logger=migrator t=2025-06-18T15:19:09.426468995Z level=info msg="Executing migration" id="drop managed folder create actions"
grafana | logger=migrator t=2025-06-18T15:19:09.426687857Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=218.902µs
grafana | logger=migrator t=2025-06-18T15:19:09.430996682Z level=info msg="Executing migration" id="alerting notification permissions"
grafana | logger=migrator t=2025-06-18T15:19:09.431511086Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=514.404µs
grafana | logger=migrator t=2025-06-18T15:19:09.434052917Z level=info msg="Executing migration" id="create query_history_star table v1"
grafana | logger=migrator t=2025-06-18T15:19:09.435147885Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.094238ms
level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.094238ms grafana | logger=migrator t=2025-06-18T15:19:09.439266229Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2025-06-18T15:19:09.441285975Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=2.021436ms grafana | logger=migrator t=2025-06-18T15:19:09.445411238Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2025-06-18T15:19:09.455183916Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=9.773838ms grafana | logger=migrator t=2025-06-18T15:19:09.470943752Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2025-06-18T15:19:09.471015343Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=74.95µs grafana | logger=migrator t=2025-06-18T15:19:09.476142444Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2025-06-18T15:19:09.477334394Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.1879ms grafana | logger=migrator t=2025-06-18T15:19:09.481056254Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2025-06-18T15:19:09.482166092Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.109348ms grafana | logger=migrator t=2025-06-18T15:19:09.493563053Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2025-06-18T15:19:09.494903775Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.341392ms grafana | logger=migrator t=2025-06-18T15:19:09.499856094Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2025-06-18T15:19:09.512446145Z level=info msg="Migration successfully executed" id="add correlation config column" duration=12.589511ms grafana | logger=migrator t=2025-06-18T15:19:09.515781552Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2025-06-18T15:19:09.517608317Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.826385ms grafana | logger=migrator t=2025-06-18T15:19:09.523260312Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2025-06-18T15:19:09.525031387Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.753404ms grafana | logger=migrator t=2025-06-18T15:19:09.530386979Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-18T15:19:09.555861694Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=25.468585ms grafana | logger=migrator t=2025-06-18T15:19:09.559407302Z level=info msg="Executing migration" id="create correlation v2" grafana | logger=migrator t=2025-06-18T15:19:09.56041257Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.004848ms grafana | logger=migrator 
grafana | logger=migrator t=2025-06-18T15:19:09.566094015Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.27405ms
grafana | logger=migrator t=2025-06-18T15:19:09.570201809Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
grafana | logger=migrator t=2025-06-18T15:19:09.571569329Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.36871ms
grafana | logger=migrator t=2025-06-18T15:19:09.574947826Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
grafana | logger=migrator t=2025-06-18T15:19:09.576409448Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.463212ms
grafana | logger=migrator t=2025-06-18T15:19:09.579865976Z level=info msg="Executing migration" id="copy correlation v1 to v2"
grafana | logger=migrator t=2025-06-18T15:19:09.580171788Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=304.202µs
grafana | logger=migrator t=2025-06-18T15:19:09.586155476Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
grafana | logger=migrator t=2025-06-18T15:19:09.588046432Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.885615ms
grafana | logger=migrator t=2025-06-18T15:19:09.593133952Z level=info msg="Executing migration" id="add provisioning column"
grafana | logger=migrator t=2025-06-18T15:19:09.602999311Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.864009ms
grafana | logger=migrator t=2025-06-18T15:19:09.611476939Z level=info msg="Executing migration" id="add type column"
grafana | logger=migrator t=2025-06-18T15:19:09.624926377Z level=info msg="Migration successfully executed" id="add type column" duration=13.446538ms
grafana | logger=migrator t=2025-06-18T15:19:09.630410631Z level=info msg="Executing migration" id="create entity_events table"
grafana | logger=migrator t=2025-06-18T15:19:09.631556571Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.14794ms
grafana | logger=migrator t=2025-06-18T15:19:09.635129359Z level=info msg="Executing migration" id="create dashboard public config v1"
grafana | logger=migrator t=2025-06-18T15:19:09.636438609Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.30932ms
grafana | logger=migrator t=2025-06-18T15:19:09.643700408Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
grafana | logger=migrator t=2025-06-18T15:19:09.644424544Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
grafana | logger=migrator t=2025-06-18T15:19:09.649640436Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
grafana | logger=migrator t=2025-06-18T15:19:09.6501672Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
grafana | logger=migrator t=2025-06-18T15:19:09.65389573Z level=info msg="Executing migration" id="Drop old dashboard public config table"
grafana | logger=migrator t=2025-06-18T15:19:09.655455192Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.563122ms
grafana | logger=migrator t=2025-06-18T15:19:09.658829809Z level=info msg="Executing migration" id="recreate dashboard public config v1"
grafana | logger=migrator t=2025-06-18T15:19:09.66007721Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.24752ms
grafana | logger=migrator t=2025-06-18T15:19:09.664171812Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
grafana | logger=migrator t=2025-06-18T15:19:09.665438103Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.27786ms
grafana | logger=migrator t=2025-06-18T15:19:09.67135179Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
grafana | logger=migrator t=2025-06-18T15:19:09.6726458Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.29287ms
grafana | logger=migrator t=2025-06-18T15:19:09.679420815Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
grafana | logger=migrator t=2025-06-18T15:19:09.680575573Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.148328ms
grafana | logger=migrator t=2025-06-18T15:19:09.684651477Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
grafana | logger=migrator t=2025-06-18T15:19:09.686733423Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.080106ms
grafana | logger=migrator t=2025-06-18T15:19:09.690241552Z level=info msg="Executing migration" id="Drop public config table"
grafana | logger=migrator t=2025-06-18T15:19:09.69127987Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.035508ms
grafana | logger=migrator t=2025-06-18T15:19:09.696479031Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
grafana | logger=migrator t=2025-06-18T15:19:09.698547858Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=2.068137ms
grafana | logger=migrator t=2025-06-18T15:19:09.702769202Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
grafana | logger=migrator t=2025-06-18T15:19:09.704209044Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.439102ms
grafana | logger=migrator t=2025-06-18T15:19:09.70883Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
grafana | logger=migrator t=2025-06-18T15:19:09.70998355Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.15482ms
grafana | logger=migrator t=2025-06-18T15:19:09.713303786Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
grafana | logger=migrator t=2025-06-18T15:19:09.715752486Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=2.44665ms
grafana | logger=migrator t=2025-06-18T15:19:09.722947833Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" grafana | logger=migrator t=2025-06-18T15:19:09.748188156Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=25.229273ms grafana | logger=migrator t=2025-06-18T15:19:09.7624218Z level=info msg="Executing migration" id="add annotations_enabled column" grafana | logger=migrator t=2025-06-18T15:19:09.771702305Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=9.280665ms grafana | logger=migrator t=2025-06-18T15:19:09.775688336Z level=info msg="Executing migration" id="add time_selection_enabled column" grafana | logger=migrator t=2025-06-18T15:19:09.781909936Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.22123ms grafana | logger=migrator t=2025-06-18T15:19:09.78483432Z level=info msg="Executing migration" id="delete orphaned public dashboards" grafana | logger=migrator t=2025-06-18T15:19:09.785084382Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=248.822µs grafana | logger=migrator t=2025-06-18T15:19:09.789549158Z level=info msg="Executing migration" id="add share column" grafana | logger=migrator t=2025-06-18T15:19:09.800990249Z level=info msg="Migration successfully executed" id="add share column" duration=11.401511ms grafana | logger=migrator t=2025-06-18T15:19:09.80480104Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" grafana | logger=migrator t=2025-06-18T15:19:09.805113373Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=313.494µs grafana | logger=migrator t=2025-06-18T15:19:09.80854754Z level=info msg="Executing migration" id="create file table" grafana | logger=migrator t=2025-06-18T15:19:09.810348264Z level=info msg="Migration successfully executed" id="create file table" duration=1.800214ms grafana | logger=migrator t=2025-06-18T15:19:09.818254858Z level=info msg="Executing migration" id="file table idx: path natural pk" grafana | logger=migrator t=2025-06-18T15:19:09.819583709Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.328581ms grafana | logger=migrator t=2025-06-18T15:19:09.822872135Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2025-06-18T15:19:09.824607229Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.736124ms grafana | logger=migrator t=2025-06-18T15:19:09.829110225Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2025-06-18T15:19:09.829907901Z level=info msg="Migration successfully executed" id="create file_meta table" duration=797.476µs grafana | logger=migrator t=2025-06-18T15:19:09.833536811Z level=info msg="Executing migration" id="file table idx: path key" grafana | logger=migrator t=2025-06-18T15:19:09.834617149Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.079508ms grafana | logger=migrator t=2025-06-18T15:19:09.837751524Z level=info msg="Executing migration" id="set path collation in file table" grafana | logger=migrator t=2025-06-18T15:19:09.837769574Z level=info msg="Migration successfully executed" id="set path collation 
grafana | logger=migrator t=2025-06-18T15:19:09.840676608Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
grafana | logger=migrator t=2025-06-18T15:19:09.840694468Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=18.31µs
grafana | logger=migrator t=2025-06-18T15:19:09.844964442Z level=info msg="Executing migration" id="managed permissions migration"
grafana | logger=migrator t=2025-06-18T15:19:09.847987436Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=3.018234ms
grafana | logger=migrator t=2025-06-18T15:19:09.853727442Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
grafana | logger=migrator t=2025-06-18T15:19:09.854249536Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=514.194µs
grafana | logger=migrator t=2025-06-18T15:19:09.857847125Z level=info msg="Executing migration" id="RBAC action name migrator"
grafana | logger=migrator t=2025-06-18T15:19:09.85967902Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.830045ms
grafana | logger=migrator t=2025-06-18T15:19:09.863150598Z level=info msg="Executing migration" id="Add UID column to playlist"
grafana | logger=migrator t=2025-06-18T15:19:09.873234549Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=10.082391ms
grafana | logger=migrator t=2025-06-18T15:19:09.878650622Z level=info msg="Executing migration" id="Update uid column values in playlist"
grafana | logger=migrator t=2025-06-18T15:19:09.878896184Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=247.012µs
grafana | logger=migrator t=2025-06-18T15:19:09.883812284Z level=info msg="Executing migration" id="Add index for uid in playlist"
grafana | logger=migrator t=2025-06-18T15:19:09.885784909Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.972715ms
grafana | logger=migrator t=2025-06-18T15:19:09.90201746Z level=info msg="Executing migration" id="update group index for alert rules"
grafana | logger=migrator t=2025-06-18T15:19:09.902823426Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=807.816µs
grafana | logger=migrator t=2025-06-18T15:19:09.909099787Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
grafana | logger=migrator t=2025-06-18T15:19:09.909622361Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=528.424µs
grafana | logger=migrator t=2025-06-18T15:19:09.913946055Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
grafana | logger=migrator t=2025-06-18T15:19:09.914876813Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=929.498µs
grafana | logger=migrator t=2025-06-18T15:19:09.9183029Z level=info msg="Executing migration" id="add action column to seed_assignment"
grafana | logger=migrator t=2025-06-18T15:19:09.928649004Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=10.345464ms
grafana | logger=migrator t=2025-06-18T15:19:09.931846699Z level=info msg="Executing migration" id="add scope column to seed_assignment"
grafana | logger=migrator t=2025-06-18T15:19:09.942278973Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=10.431534ms
grafana | logger=migrator t=2025-06-18T15:19:09.946629987Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
grafana | logger=migrator t=2025-06-18T15:19:09.947657565Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.027258ms
grafana | logger=migrator t=2025-06-18T15:19:09.95068897Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
grafana | logger=migrator t=2025-06-18T15:19:10.040295734Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=89.598882ms
grafana | logger=migrator t=2025-06-18T15:19:10.066006093Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
grafana | logger=migrator t=2025-06-18T15:19:10.068701755Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=2.696952ms
grafana | logger=migrator t=2025-06-18T15:19:10.073920807Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
grafana | logger=migrator t=2025-06-18T15:19:10.075718051Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.797964ms
grafana | logger=migrator t=2025-06-18T15:19:10.07920002Z level=info msg="Executing migration" id="add primary key to seed_assigment"
grafana | logger=migrator t=2025-06-18T15:19:10.10861722Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=29.39409ms
grafana | logger=migrator t=2025-06-18T15:19:10.113223118Z level=info msg="Executing migration" id="add origin column to seed_assignment"
grafana | logger=migrator t=2025-06-18T15:19:10.122532523Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=9.308615ms
grafana | logger=migrator t=2025-06-18T15:19:10.12833938Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
grafana | logger=migrator t=2025-06-18T15:19:10.128878085Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=541.245µs
grafana | logger=migrator t=2025-06-18T15:19:10.133462382Z level=info msg="Executing migration" id="prevent seeding OnCall access"
grafana | logger=migrator t=2025-06-18T15:19:10.133744084Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=281.252µs
grafana | logger=migrator t=2025-06-18T15:19:10.137098711Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
grafana | logger=migrator t=2025-06-18T15:19:10.137391754Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=292.543µs
grafana | logger=migrator t=2025-06-18T15:19:10.14062226Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
grafana | logger=migrator t=2025-06-18T15:19:10.140939383Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=319.813µs
grafana | logger=migrator t=2025-06-18T15:19:10.147196544Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
id="migrate external alertmanagers to datsourcse" grafana | logger=migrator t=2025-06-18T15:19:10.147697918Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=505.894µs grafana | logger=migrator t=2025-06-18T15:19:10.153718207Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2025-06-18T15:19:10.154919976Z level=info msg="Migration successfully executed" id="create folder table" duration=1.201729ms grafana | logger=migrator t=2025-06-18T15:19:10.158012503Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2025-06-18T15:19:10.159177302Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.164259ms grafana | logger=migrator t=2025-06-18T15:19:10.162139266Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2025-06-18T15:19:10.163292835Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.152559ms grafana | logger=migrator t=2025-06-18T15:19:10.168199995Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2025-06-18T15:19:10.168223965Z level=info msg="Migration successfully executed" id="Update folder title length" duration=23.47µs grafana | logger=migrator t=2025-06-18T15:19:10.171029708Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-18T15:19:10.171886095Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=852.297µs grafana | logger=migrator t=2025-06-18T15:19:10.178250747Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-18T15:19:10.179089794Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=838.637µs grafana | logger=migrator t=2025-06-18T15:19:10.184292276Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2025-06-18T15:19:10.186121312Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.829316ms grafana | logger=migrator t=2025-06-18T15:19:10.189210727Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2025-06-18T15:19:10.18965142Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=440.523µs grafana | logger=migrator t=2025-06-18T15:19:10.192585914Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2025-06-18T15:19:10.192874586Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=287.942µs grafana | logger=migrator t=2025-06-18T15:19:10.207908258Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2025-06-18T15:19:10.209048838Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.13804ms grafana | logger=migrator t=2025-06-18T15:19:10.213881757Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" grafana | logger=migrator 
grafana | logger=migrator t=2025-06-18T15:19:10.218318983Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
grafana | logger=migrator t=2025-06-18T15:19:10.219419002Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.099639ms
grafana | logger=migrator t=2025-06-18T15:19:10.224438843Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
grafana | logger=migrator t=2025-06-18T15:19:10.225596843Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.15755ms
grafana | logger=migrator t=2025-06-18T15:19:10.228595517Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
grafana | logger=migrator t=2025-06-18T15:19:10.229683496Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.087359ms
grafana | logger=migrator t=2025-06-18T15:19:10.232428118Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title"
grafana | logger=migrator t=2025-06-18T15:19:10.233505407Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.076579ms
grafana | logger=migrator t=2025-06-18T15:19:10.24004749Z level=info msg="Executing migration" id="create anon_device table"
grafana | logger=migrator t=2025-06-18T15:19:10.241004048Z level=info msg="Migration successfully executed" id="create anon_device table" duration=955.368µs
grafana | logger=migrator t=2025-06-18T15:19:10.244185214Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
grafana | logger=migrator t=2025-06-18T15:19:10.246022919Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.837485ms
grafana | logger=migrator t=2025-06-18T15:19:10.250440595Z level=info msg="Executing migration" id="add index anon_device.updated_at"
grafana | logger=migrator t=2025-06-18T15:19:10.251555734Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.114529ms
grafana | logger=migrator t=2025-06-18T15:19:10.254647109Z level=info msg="Executing migration" id="create signing_key table"
grafana | logger=migrator t=2025-06-18T15:19:10.255548026Z level=info msg="Migration successfully executed" id="create signing_key table" duration=900.407µs
grafana | logger=migrator t=2025-06-18T15:19:10.258646941Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
grafana | logger=migrator t=2025-06-18T15:19:10.259759251Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.1118ms
grafana | logger=migrator t=2025-06-18T15:19:10.263880944Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
grafana | logger=migrator t=2025-06-18T15:19:10.26573689Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.855146ms
grafana | logger=migrator t=2025-06-18T15:19:10.269130877Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
grafana | logger=migrator t=2025-06-18T15:19:10.269603051Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=470.674µs
grafana | logger=migrator t=2025-06-18T15:19:10.27310493Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
grafana | logger=migrator t=2025-06-18T15:19:10.282689508Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.583798ms
grafana | logger=migrator t=2025-06-18T15:19:10.286911933Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
grafana | logger=migrator t=2025-06-18T15:19:10.287456677Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=542.694µs
grafana | logger=migrator t=2025-06-18T15:19:10.290846114Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
grafana | logger=migrator t=2025-06-18T15:19:10.290864875Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=18.841µs
grafana | logger=migrator t=2025-06-18T15:19:10.293413405Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
grafana | logger=migrator t=2025-06-18T15:19:10.295122769Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.708864ms
grafana | logger=migrator t=2025-06-18T15:19:10.300428142Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
grafana | logger=migrator t=2025-06-18T15:19:10.300449232Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=21.12µs
grafana | logger=migrator t=2025-06-18T15:19:10.30507668Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
grafana | logger=migrator t=2025-06-18T15:19:10.306281389Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.203779ms
grafana | logger=migrator t=2025-06-18T15:19:10.309683628Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
grafana | logger=migrator t=2025-06-18T15:19:10.311498562Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.813874ms
grafana | logger=migrator t=2025-06-18T15:19:10.315838478Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder"
grafana | logger=migrator t=2025-06-18T15:19:10.317569961Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.731123ms
grafana | logger=migrator t=2025-06-18T15:19:10.320895829Z level=info msg="Executing migration" id="create sso_setting table"
grafana | logger=migrator t=2025-06-18T15:19:10.321914237Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.017528ms
grafana | logger=migrator t=2025-06-18T15:19:10.325379535Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
grafana | logger=migrator t=2025-06-18T15:19:10.326129192Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=749.657µs
grafana | logger=migrator t=2025-06-18T15:19:10.330989911Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
grafana | logger=migrator t=2025-06-18T15:19:10.331341204Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=352.073µs
grafana | logger=migrator t=2025-06-18T15:19:10.359986188Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration"
grafana | logger=migrator t=2025-06-18T15:19:10.360986416Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=999.848µs
grafana | logger=migrator t=2025-06-18T15:19:10.365003428Z level=info msg="Executing migration" id="create cloud_migration table v1"
grafana | logger=migrator t=2025-06-18T15:19:10.366375Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.371001ms
grafana | logger=migrator t=2025-06-18T15:19:10.369625986Z level=info msg="Executing migration" id="create cloud_migration_run table v1"
grafana | logger=migrator t=2025-06-18T15:19:10.370606244Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=979.917µs
grafana | logger=migrator t=2025-06-18T15:19:10.374647317Z level=info msg="Executing migration" id="add stack_id column"
grafana | logger=migrator t=2025-06-18T15:19:10.386249281Z level=info msg="Migration successfully executed" id="add stack_id column" duration=11.601414ms
grafana | logger=migrator t=2025-06-18T15:19:10.389300356Z level=info msg="Executing migration" id="add region_slug column"
grafana | logger=migrator t=2025-06-18T15:19:10.397763845Z level=info msg="Migration successfully executed" id="add region_slug column" duration=8.462019ms
grafana | logger=migrator t=2025-06-18T15:19:10.401717248Z level=info msg="Executing migration" id="add cluster_slug column"
grafana | logger=migrator t=2025-06-18T15:19:10.409048317Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=7.331419ms
grafana | logger=migrator t=2025-06-18T15:19:10.412308513Z level=info msg="Executing migration" id="add migration uid column"
grafana | logger=migrator t=2025-06-18T15:19:10.421850912Z level=info msg="Migration successfully executed" id="add migration uid column" duration=9.541409ms
grafana | logger=migrator t=2025-06-18T15:19:10.425125508Z level=info msg="Executing migration" id="Update uid column values for migration"
grafana | logger=migrator t=2025-06-18T15:19:10.425253459Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=127.181µs
grafana | logger=migrator t=2025-06-18T15:19:10.43146288Z level=info msg="Executing migration" id="Add unique index migration_uid"
grafana | logger=migrator t=2025-06-18T15:19:10.433753438Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=2.293048ms
grafana | logger=migrator t=2025-06-18T15:19:10.438428847Z level=info msg="Executing migration" id="add migration run uid column"
grafana | logger=migrator t=2025-06-18T15:19:10.448433378Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=10.004791ms
grafana | logger=migrator t=2025-06-18T15:19:10.451835075Z level=info msg="Executing migration" id="Update uid column values for migration run"
grafana | logger=migrator t=2025-06-18T15:19:10.451959986Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=125.271µs
grafana | logger=migrator t=2025-06-18T15:19:10.454247035Z level=info msg="Executing migration" id="Add unique index migration_run_uid"
msg="Executing migration" id="Add unique index migration_run_uid" grafana | logger=migrator t=2025-06-18T15:19:10.455087963Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=840.648µs grafana | logger=migrator t=2025-06-18T15:19:10.459062135Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-18T15:19:10.483696496Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=24.634201ms grafana | logger=migrator t=2025-06-18T15:19:10.490322209Z level=info msg="Executing migration" id="create cloud_migration_session v2" grafana | logger=migrator t=2025-06-18T15:19:10.491916082Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=1.593753ms grafana | logger=migrator t=2025-06-18T15:19:10.506075088Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" grafana | logger=migrator t=2025-06-18T15:19:10.508253306Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=2.176608ms grafana | logger=migrator t=2025-06-18T15:19:10.513985182Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" grafana | logger=migrator t=2025-06-18T15:19:10.514316065Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=330.503µs grafana | logger=migrator t=2025-06-18T15:19:10.517473381Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" grafana | logger=migrator t=2025-06-18T15:19:10.518795231Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=1.31623ms grafana | logger=migrator t=2025-06-18T15:19:10.522444581Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-18T15:19:10.54813618Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=25.692989ms grafana | logger=migrator t=2025-06-18T15:19:10.552337344Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" grafana | logger=migrator t=2025-06-18T15:19:10.553273662Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=935.818µs grafana | logger=migrator t=2025-06-18T15:19:10.556556549Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" grafana | logger=migrator t=2025-06-18T15:19:10.557695538Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=1.138699ms grafana | logger=migrator t=2025-06-18T15:19:10.560933184Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" grafana | logger=migrator t=2025-06-18T15:19:10.561370999Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=437.245µs grafana | logger=migrator t=2025-06-18T15:19:10.565583682Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" grafana | logger=migrator t=2025-06-18T15:19:10.5665767Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=992.368µs grafana | logger=migrator 
grafana | logger=migrator t=2025-06-18T15:19:10.579553336Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=9.393756ms
grafana | logger=migrator t=2025-06-18T15:19:10.582629421Z level=info msg="Executing migration" id="add snapshot status column"
grafana | logger=migrator t=2025-06-18T15:19:10.59222381Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=9.594389ms
grafana | logger=migrator t=2025-06-18T15:19:10.596040661Z level=info msg="Executing migration" id="add snapshot local_directory column"
grafana | logger=migrator t=2025-06-18T15:19:10.605021734Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=8.979383ms
grafana | logger=migrator t=2025-06-18T15:19:10.61319427Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column"
grafana | logger=migrator t=2025-06-18T15:19:10.625628741Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=12.433641ms
grafana | logger=migrator t=2025-06-18T15:19:10.628781688Z level=info msg="Executing migration" id="add snapshot encryption_key column"
grafana | logger=migrator t=2025-06-18T15:19:10.637044635Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=8.261637ms
grafana | logger=migrator t=2025-06-18T15:19:10.650406873Z level=info msg="Executing migration" id="add snapshot error_string column"
grafana | logger=migrator t=2025-06-18T15:19:10.662320031Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=11.913398ms
grafana | logger=migrator t=2025-06-18T15:19:10.665632868Z level=info msg="Executing migration" id="create cloud_migration_resource table v1"
grafana | logger=migrator t=2025-06-18T15:19:10.666309733Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=675.515µs
grafana | logger=migrator t=2025-06-18T15:19:10.669561639Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column"
grafana | logger=migrator t=2025-06-18T15:19:10.705702434Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=36.119135ms
grafana | logger=migrator t=2025-06-18T15:19:10.713530058Z level=info msg="Executing migration" id="add cloud_migration_resource.name column"
grafana | logger=migrator t=2025-06-18T15:19:10.723051745Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=9.520887ms
grafana | logger=migrator t=2025-06-18T15:19:10.729011784Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column"
grafana | logger=migrator t=2025-06-18T15:19:10.737645244Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=8.63297ms
grafana | logger=migrator t=2025-06-18T15:19:10.741853899Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column"
grafana | logger=migrator t=2025-06-18T15:19:10.748599883Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=6.745584ms
grafana | logger=migrator t=2025-06-18T15:19:10.753135021Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column"
grafana | logger=migrator t=2025-06-18T15:19:10.76291058Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=9.773968ms
t=2025-06-18T15:19:10.76291058Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=9.773968ms grafana | logger=migrator t=2025-06-18T15:19:10.766171317Z level=info msg="Executing migration" id="increase resource_uid column length" grafana | logger=migrator t=2025-06-18T15:19:10.766199397Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=28.46µs grafana | logger=migrator t=2025-06-18T15:19:10.769715245Z level=info msg="Executing migration" id="alter kv_store.value to longtext" grafana | logger=migrator t=2025-06-18T15:19:10.769733506Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=18.811µs grafana | logger=migrator t=2025-06-18T15:19:10.773863629Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" grafana | logger=migrator t=2025-06-18T15:19:10.78375716Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=9.892461ms grafana | logger=migrator t=2025-06-18T15:19:10.787741392Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" grafana | logger=migrator t=2025-06-18T15:19:10.794851Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=7.108938ms grafana | logger=migrator t=2025-06-18T15:19:10.798181407Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" grafana | logger=migrator t=2025-06-18T15:19:10.79850775Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=325.513µs grafana | logger=migrator t=2025-06-18T15:19:10.801846967Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" grafana | logger=migrator t=2025-06-18T15:19:10.802076929Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=225.832µs grafana | logger=migrator t=2025-06-18T15:19:10.806462265Z level=info msg="Executing migration" id="add record column to alert_rule table" grafana | logger=migrator t=2025-06-18T15:19:10.818770805Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=12.30913ms grafana | logger=migrator t=2025-06-18T15:19:10.822005542Z level=info msg="Executing migration" id="add record column to alert_rule_version table" grafana | logger=migrator t=2025-06-18T15:19:10.832173624Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=10.173592ms grafana | logger=migrator t=2025-06-18T15:19:10.835437601Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" grafana | logger=migrator t=2025-06-18T15:19:10.84394956Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=8.509089ms grafana | logger=migrator t=2025-06-18T15:19:10.849391284Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" grafana | logger=migrator t=2025-06-18T15:19:10.862014547Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=12.623923ms grafana | logger=migrator t=2025-06-18T15:19:10.867804625Z level=info msg="Executing migration" id="Add 
scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" grafana | logger=migrator t=2025-06-18T15:19:10.868203338Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=400.313µs grafana | logger=migrator t=2025-06-18T15:19:10.872301732Z level=info msg="Executing migration" id="add metadata column to alert_rule table" grafana | logger=migrator t=2025-06-18T15:19:10.884705292Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=12.39981ms grafana | logger=migrator t=2025-06-18T15:19:10.888741685Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" grafana | logger=migrator t=2025-06-18T15:19:10.898068531Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=9.326826ms grafana | logger=migrator t=2025-06-18T15:19:10.903496556Z level=info msg="Executing migration" id="delete orphaned service account permissions" grafana | logger=migrator t=2025-06-18T15:19:10.903918269Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=421.443µs grafana | logger=migrator t=2025-06-18T15:19:10.908556636Z level=info msg="Executing migration" id="adding action set permissions" grafana | logger=migrator t=2025-06-18T15:19:10.909405304Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=848.108µs grafana | logger=migrator t=2025-06-18T15:19:10.913196444Z level=info msg="Executing migration" id="create user_external_session table" grafana | logger=migrator t=2025-06-18T15:19:10.914924669Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.727265ms grafana | logger=migrator t=2025-06-18T15:19:10.929246855Z level=info msg="Executing migration" id="increase name_id column length to 1024" grafana | logger=migrator t=2025-06-18T15:19:10.929273925Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=28.37µs grafana | logger=migrator t=2025-06-18T15:19:10.933857282Z level=info msg="Executing migration" id="increase session_id column length to 1024" grafana | logger=migrator t=2025-06-18T15:19:10.933884512Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=28.21µs grafana | logger=migrator t=2025-06-18T15:19:10.937698124Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" grafana | logger=migrator t=2025-06-18T15:19:10.938296949Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=598.255µs grafana | logger=migrator t=2025-06-18T15:19:10.942078999Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" grafana | logger=migrator t=2025-06-18T15:19:10.952160062Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=10.079763ms grafana | logger=migrator t=2025-06-18T15:19:10.956160234Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" grafana | logger=migrator t=2025-06-18T15:19:10.96539461Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=9.232546ms grafana | logger=migrator 
t=2025-06-18T15:19:10.968909398Z level=info msg="Executing migration" id="add alert_rule_state table" grafana | logger=migrator t=2025-06-18T15:19:10.969929786Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=1.019818ms grafana | logger=migrator t=2025-06-18T15:19:10.976205338Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" grafana | logger=migrator t=2025-06-18T15:19:10.977942792Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=1.734443ms grafana | logger=migrator t=2025-06-18T15:19:10.982756811Z level=info msg="Executing migration" id="add guid column to alert_rule table" grafana | logger=migrator t=2025-06-18T15:19:10.994440926Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=11.684145ms grafana | logger=migrator t=2025-06-18T15:19:10.997876934Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" grafana | logger=migrator t=2025-06-18T15:19:11.005572287Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=7.701183ms grafana | logger=migrator t=2025-06-18T15:19:11.010399886Z level=info msg="Executing migration" id="cleanup alert_rule_version table" grafana | logger=migrator t=2025-06-18T15:19:11.010425687Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 grafana | logger=migrator t=2025-06-18T15:19:11.010698549Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 grafana | logger=migrator t=2025-06-18T15:19:11.010718779Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=319.333µs grafana | logger=migrator t=2025-06-18T15:19:11.014862902Z level=info msg="Executing migration" id="populate rule guid in alert rule table" grafana | logger=migrator t=2025-06-18T15:19:11.015457418Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=593.876µs grafana | logger=migrator t=2025-06-18T15:19:11.024808594Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-18T15:19:11.027122762Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=2.363119ms grafana | logger=migrator t=2025-06-18T15:19:11.032139984Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" grafana | logger=migrator t=2025-06-18T15:19:11.033599055Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.460041ms grafana | logger=migrator t=2025-06-18T15:19:11.037984691Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" grafana | logger=migrator t=2025-06-18T15:19:11.040131868Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=2.146967ms grafana | logger=migrator t=2025-06-18T15:19:11.04401964Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" grafana | logger=migrator 
t=2025-06-18T15:19:11.046077527Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=2.057457ms grafana | logger=migrator t=2025-06-18T15:19:11.049842057Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" grafana | logger=migrator t=2025-06-18T15:19:11.0574684Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=7.625273ms grafana | logger=migrator t=2025-06-18T15:19:11.068687791Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" grafana | logger=migrator t=2025-06-18T15:19:11.079275237Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=10.586525ms grafana | logger=migrator t=2025-06-18T15:19:11.085010053Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" grafana | logger=migrator t=2025-06-18T15:19:11.092396634Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=7.386461ms grafana | logger=migrator t=2025-06-18T15:19:11.095948913Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" grafana | logger=migrator t=2025-06-18T15:19:11.105136948Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=9.188085ms grafana | logger=migrator t=2025-06-18T15:19:11.108738977Z level=info msg="Executing migration" id="remove the datasources:drilldown action" grafana | logger=migrator t=2025-06-18T15:19:11.10907878Z level=info msg="Removed 0 datasources:drilldown permissions" grafana | logger=migrator t=2025-06-18T15:19:11.109195511Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=429.864µs grafana | logger=migrator t=2025-06-18T15:19:11.113391685Z level=info msg="Executing migration" id="remove title in folder unique index" grafana | logger=migrator t=2025-06-18T15:19:11.115018468Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.626683ms grafana | logger=migrator t=2025-06-18T15:19:11.119423224Z level=info msg="migrations completed" performed=654 skipped=0 duration=5.760729123s grafana | logger=migrator t=2025-06-18T15:19:11.12010155Z level=info msg="Unlocking database" grafana | logger=sqlstore t=2025-06-18T15:19:11.135080861Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2025-06-18T15:19:11.135337483Z level=info msg="Created default organization" grafana | logger=secrets t=2025-06-18T15:19:11.140810128Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-18T15:19:11.259134662Z level=info msg="Restored cache from database" duration=614.896µs grafana | logger=resource-migrator t=2025-06-18T15:19:11.267943384Z level=info msg="Locking database" grafana | logger=resource-migrator t=2025-06-18T15:19:11.267957084Z level=info msg="Starting DB migrations" grafana | logger=resource-migrator t=2025-06-18T15:19:11.275374113Z level=info msg="Executing migration" id="create resource_migration_log table" grafana | logger=resource-migrator t=2025-06-18T15:19:11.276203501Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=829.208µs grafana | 
logger=resource-migrator t=2025-06-18T15:19:11.279648699Z level=info msg="Executing migration" id="Initialize resource tables" grafana | logger=resource-migrator t=2025-06-18T15:19:11.279665019Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=16.91µs grafana | logger=resource-migrator t=2025-06-18T15:19:11.282716874Z level=info msg="Executing migration" id="drop table resource" grafana | logger=resource-migrator t=2025-06-18T15:19:11.282816774Z level=info msg="Migration successfully executed" id="drop table resource" duration=99.96µs grafana | logger=resource-migrator t=2025-06-18T15:19:11.287904476Z level=info msg="Executing migration" id="create table resource" grafana | logger=resource-migrator t=2025-06-18T15:19:11.288718073Z level=info msg="Migration successfully executed" id="create table resource" duration=813.606µs grafana | logger=resource-migrator t=2025-06-18T15:19:11.296631397Z level=info msg="Executing migration" id="create table resource, index: 0" grafana | logger=resource-migrator t=2025-06-18T15:19:11.297612955Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=980.458µs grafana | logger=resource-migrator t=2025-06-18T15:19:11.30066001Z level=info msg="Executing migration" id="drop table resource_history" grafana | logger=resource-migrator t=2025-06-18T15:19:11.30073294Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=72.801µs grafana | logger=resource-migrator t=2025-06-18T15:19:11.306675929Z level=info msg="Executing migration" id="create table resource_history" grafana | logger=resource-migrator t=2025-06-18T15:19:11.308430013Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.755364ms grafana | logger=resource-migrator t=2025-06-18T15:19:11.314814915Z level=info msg="Executing migration" id="create table resource_history, index: 0" grafana | logger=resource-migrator t=2025-06-18T15:19:11.316153096Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.337601ms grafana | logger=resource-migrator t=2025-06-18T15:19:11.319301181Z level=info msg="Executing migration" id="create table resource_history, index: 1" grafana | logger=resource-migrator t=2025-06-18T15:19:11.321276268Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.972557ms grafana | logger=resource-migrator t=2025-06-18T15:19:11.326792363Z level=info msg="Executing migration" id="drop table resource_version" grafana | logger=resource-migrator t=2025-06-18T15:19:11.326947254Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=147.571µs grafana | logger=resource-migrator t=2025-06-18T15:19:11.33143533Z level=info msg="Executing migration" id="create table resource_version" grafana | logger=resource-migrator t=2025-06-18T15:19:11.332306448Z level=info msg="Migration successfully executed" id="create table resource_version" duration=870.858µs grafana | logger=resource-migrator t=2025-06-18T15:19:11.336749343Z level=info msg="Executing migration" id="create table resource_version, index: 0" grafana | logger=resource-migrator t=2025-06-18T15:19:11.339215154Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=2.4683ms grafana | logger=resource-migrator t=2025-06-18T15:19:11.350383704Z level=info msg="Executing migration" id="drop table resource_blob" 
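The migrator and resource-migrator records here follow a common id-keyed pattern: take a database lock, execute each pending migration exactly once, and record its id in a log table so reruns skip it. A minimal sketch of that pattern in Python with sqlite3 follows; it is illustrative only, with hypothetical table names and SQL, not Grafana's actual migrator code.

# Illustrative id-keyed migration runner (hypothetical schema, not Grafana's).
import sqlite3
import time

MIGRATIONS = [
    # (migration id, SQL) -- ids are recorded so reruns skip applied steps
    ("create table resource",
     "CREATE TABLE IF NOT EXISTS resource (guid TEXT PRIMARY KEY, folder TEXT)"),
    ("add column previous_resource_version in resource",
     "ALTER TABLE resource ADD COLUMN previous_resource_version INTEGER"),
]

def run_migrations(conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS migration_log "
                 "(id TEXT PRIMARY KEY, applied_at REAL)")
    for mig_id, sql in MIGRATIONS:
        if conn.execute("SELECT 1 FROM migration_log WHERE id = ?",
                        (mig_id,)).fetchone():
            continue  # already applied on a previous run
        start = time.perf_counter()
        conn.execute(sql)
        conn.execute("INSERT INTO migration_log (id, applied_at) VALUES (?, ?)",
                     (mig_id, time.time()))
        conn.commit()
        print(f'msg="Migration successfully executed" id="{mig_id}" '
              f"duration={(time.perf_counter() - start) * 1e3:.3f}ms")

run_migrations(sqlite3.connect(":memory:"))

Keying on free-form ids rather than sequential version numbers lets new migrations be appended without renumbering, which matches the free-form ids in the records above and below.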
grafana | logger=resource-migrator t=2025-06-18T15:19:11.350535066Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=155.302µs grafana | logger=resource-migrator t=2025-06-18T15:19:11.35712638Z level=info msg="Executing migration" id="create table resource_blob" grafana | logger=resource-migrator t=2025-06-18T15:19:11.359511438Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=2.384408ms grafana | logger=resource-migrator t=2025-06-18T15:19:11.363004788Z level=info msg="Executing migration" id="create table resource_blob, index: 0" grafana | logger=resource-migrator t=2025-06-18T15:19:11.365083104Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=2.078026ms grafana | logger=resource-migrator t=2025-06-18T15:19:11.370600239Z level=info msg="Executing migration" id="create table resource_blob, index: 1" grafana | logger=resource-migrator t=2025-06-18T15:19:11.37188108Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.280781ms grafana | logger=resource-migrator t=2025-06-18T15:19:11.376431386Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" grafana | logger=resource-migrator t=2025-06-18T15:19:11.390617993Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=14.186687ms grafana | logger=resource-migrator t=2025-06-18T15:19:11.39395778Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" grafana | logger=resource-migrator t=2025-06-18T15:19:11.40142119Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=7.46278ms grafana | logger=resource-migrator t=2025-06-18T15:19:11.404569676Z level=info msg="Executing migration" id="Add index to resource_history for polling" grafana | logger=resource-migrator t=2025-06-18T15:19:11.405903057Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=1.330211ms grafana | logger=resource-migrator t=2025-06-18T15:19:11.41116263Z level=info msg="Executing migration" id="Add index to resource for loading" grafana | logger=resource-migrator t=2025-06-18T15:19:11.413271957Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=2.108757ms grafana | logger=resource-migrator t=2025-06-18T15:19:11.417536221Z level=info msg="Executing migration" id="Add column folder in resource_history" grafana | logger=resource-migrator t=2025-06-18T15:19:11.42970345Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=12.167879ms grafana | logger=resource-migrator t=2025-06-18T15:19:11.432629005Z level=info msg="Executing migration" id="Add column folder in resource" grafana | logger=resource-migrator t=2025-06-18T15:19:11.441459167Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=8.829182ms grafana | logger=resource-migrator t=2025-06-18T15:19:11.446388276Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" grafana | logger=deletion-marker-migrator t=2025-06-18T15:19:11.446532857Z level=info msg="finding any deletion markers" grafana | logger=resource-migrator t=2025-06-18T15:19:11.447478185Z level=info msg="Migration successfully executed" id="Migrate 
DeletionMarkers to real Resource objects" duration=1.089139ms grafana | logger=resource-migrator t=2025-06-18T15:19:11.451037144Z level=info msg="Executing migration" id="Add index to resource_history for get trash" grafana | logger=resource-migrator t=2025-06-18T15:19:11.452506347Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=1.466573ms grafana | logger=resource-migrator t=2025-06-18T15:19:11.45539096Z level=info msg="Executing migration" id="Add generation to resource history" grafana | logger=resource-migrator t=2025-06-18T15:19:11.465789044Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=10.397794ms grafana | logger=resource-migrator t=2025-06-18T15:19:11.472023226Z level=info msg="Executing migration" id="Add generation index to resource history" grafana | logger=resource-migrator t=2025-06-18T15:19:11.473991671Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=1.966165ms grafana | logger=resource-migrator t=2025-06-18T15:19:11.478215196Z level=info msg="migrations completed" performed=26 skipped=0 duration=202.901733ms grafana | logger=resource-migrator t=2025-06-18T15:19:11.479366665Z level=info msg="Unlocking database" grafana | t=2025-06-18T15:19:11.479907849Z level=info caller=logger.go:214 time=2025-06-18T15:19:11.479877519Z msg="Using channel notifier" logger=sql-resource-server grafana | logger=plugin.store t=2025-06-18T15:19:11.491886577Z level=info msg="Loading plugins..." grafana | logger=plugins.registration t=2025-06-18T15:19:11.527795619Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugins.initialization t=2025-06-18T15:19:11.52782449Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugin.store t=2025-06-18T15:19:11.52785383Z level=info msg="Plugins loaded" count=53 duration=35.968913ms grafana | logger=query_data t=2025-06-18T15:19:11.533185483Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2025-06-18T15:19:11.538102643Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-18T15:19:11.556385253Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert t=2025-06-18T15:19:11.568492091Z level=info msg="Using simple database alert instance store" grafana | logger=ngalert.state.manager.persist t=2025-06-18T15:19:11.568515801Z level=info msg="Using sync state persister" grafana | logger=infra.usagestats.collector t=2025-06-18T15:19:11.571998089Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=ngalert.state.manager t=2025-06-18T15:19:11.573932145Z level=info msg="Warming state cache for startup" grafana | logger=http.server t=2025-06-18T15:19:11.574927143Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=plugin.backgroundinstaller t=2025-06-18T15:19:11.575083284Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=grafanaStorageLogger t=2025-06-18T15:19:11.581282315Z level=info msg="Storage starting" grafana | logger=ngalert.multiorg.alertmanager t=2025-06-18T15:19:11.582078912Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=sqlstore.transactions 
t=2025-06-18T15:19:11.655961873Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=ngalert.state.manager t=2025-06-18T15:19:11.68254418Z level=info msg="State cache has been initialized" states=0 duration=108.611685ms grafana | logger=ngalert.scheduler t=2025-06-18T15:19:11.68260627Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 grafana | logger=ticker t=2025-06-18T15:19:11.682663931Z level=info msg=starting first_tick=2025-06-18T15:19:20Z grafana | logger=plugins.update.checker t=2025-06-18T15:19:11.684654906Z level=info msg="Update check succeeded" duration=103.642394ms grafana | logger=grafana.update.checker t=2025-06-18T15:19:11.693417959Z level=info msg="Update check succeeded" duration=112.509977ms grafana | logger=provisioning.datasources t=2025-06-18T15:19:11.696640295Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=sqlstore.transactions t=2025-06-18T15:19:11.715991032Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=provisioning.alerting t=2025-06-18T15:19:11.729030378Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2025-06-18T15:19:11.729110858Z level=info msg="finished to provision alerting" grafana | logger=provisioning.dashboard t=2025-06-18T15:19:11.731216156Z level=info msg="starting to provision dashboards" grafana | logger=sqlstore.transactions t=2025-06-18T15:19:11.733725406Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-18T15:19:11.769352147Z level=info msg="Patterns update finished" duration=99.071178ms grafana | logger=grafana-apiserver t=2025-06-18T15:19:12.088561021Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-18T15:19:12.089308597Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-18T15:19:12.090488527Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-18T15:19:12.09213305Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-18T15:19:12.092686045Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-18T15:19:12.093851405Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-18T15:19:12.099740203Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-18T15:19:12.100793992Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-18T15:19:12.101301496Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" grafana | logger=app-registry t=2025-06-18T15:19:12.15530708Z level=info msg="app registry initialized" grafana | logger=plugin.installer t=2025-06-18T15:19:12.487322085Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=installer.fs t=2025-06-18T15:19:12.566923908Z level=info msg="Downloaded and 
extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" grafana | logger=plugins.registration t=2025-06-18T15:19:12.591214348Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app grafana | logger=plugin.backgroundinstaller t=2025-06-18T15:19:12.591298288Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=1.016196674s grafana | logger=plugin.backgroundinstaller t=2025-06-18T15:19:12.591350719Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=provisioning.dashboard t=2025-06-18T15:19:12.687278386Z level=info msg="finished to provision dashboards" grafana | logger=plugin.installer t=2025-06-18T15:19:13.37857847Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=installer.fs t=2025-06-18T15:19:13.43943161Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.3 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" grafana | logger=plugins.registration t=2025-06-18T15:19:13.459956708Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app grafana | logger=plugin.backgroundinstaller t=2025-06-18T15:19:13.459982288Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=868.592119ms grafana | logger=plugin.backgroundinstaller t=2025-06-18T15:19:13.460013818Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=plugin.installer t=2025-06-18T15:19:13.918123347Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=installer.fs t=2025-06-18T15:19:14.057571142Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.18 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" grafana | logger=plugins.registration t=2025-06-18T15:19:14.083103531Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app grafana | logger=plugin.backgroundinstaller t=2025-06-18T15:19:14.083262612Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=623.240644ms grafana | logger=plugin.backgroundinstaller t=2025-06-18T15:19:14.083318483Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=plugin.installer t=2025-06-18T15:19:14.431467218Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=installer.fs t=2025-06-18T15:19:14.485072117Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" grafana | logger=plugins.registration t=2025-06-18T15:19:14.500908528Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app grafana | logger=plugin.backgroundinstaller t=2025-06-18T15:19:14.500931518Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=417.535605ms grafana | logger=infra.usagestats t=2025-06-18T15:19:52.589820869Z level=info msg="Usage stats are ready to report" kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... 
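As the subsequent records show, the Zookeeper preflight amounts to opening a client session against the configured connectString and closing it again. A rough host-side equivalent using the kazoo client, assuming the zookeeper:2181 connectString from the log, might look like the sketch below; this is not the Confluent preflight implementation.

# Open-and-close session check against Zookeeper (sketch, assumes kazoo installed).
from kazoo.client import KazooClient

zk = KazooClient(hosts="zookeeper:2181")
zk.start(timeout=40)  # raises if no session is established within the timeout
print("zookeeper is healthy, session id:", hex(zk.client_id[0]))
zk.stop()
zk.close()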
kafka | [2025-06-18 15:19:11,941] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:11,941] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:11,941] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:11,941] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:11,941] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:11,941] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:11,942] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:11,942] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:11,942] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:11,942] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:11,942] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:11,942] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:11,942] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:11,942] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:11,942] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:11,942] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:11,942] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:11,942] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:11,945] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:11,949] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-18 15:19:11,954] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-18 15:19:11,961] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-18 15:19:11,992] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-18 15:19:11,992] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-18 15:19:12,002] INFO Socket connection established, initiating session, client: /172.17.0.8:52716, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-18 15:19:12,034] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000021ff00000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-18 15:19:12,163] INFO Session: 0x10000021ff00000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:12,163] INFO EventThread shut down for session: 0x10000021ff00000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... 
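With the health check done and the broker launching, a host-side smoke test could talk to the PLAINTEXT_HOST listener (localhost:29092, per the advertised.listeners value printed in the KafkaConfig dump further below). A minimal kafka-python round trip is sketched here with a made-up topic name; since auto.create.topics.enable=true in this broker's config, the topic is created on first use.

# Produce then consume one message via the host-mapped listener (sketch).
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:29092")
producer.send("csit-smoke-test", b"hello")  # hypothetical topic name
producer.flush()
producer.close()

consumer = KafkaConsumer(
    "csit-smoke-test",
    bootstrap_servers="localhost:29092",
    auto_offset_reset="earliest",  # read from the start of the topic
    consumer_timeout_ms=5000,      # stop iterating if nothing arrives
)
print([record.value for record in consumer])
consumer.close()

Inside the compose network the same test would point at kafka:9092 instead, matching the internal PLAINTEXT advertised listener.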
kafka | [2025-06-18 15:19:12,873] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2025-06-18 15:19:13,176] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-18 15:19:13,266] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2025-06-18 15:19:13,268] INFO starting (kafka.server.KafkaServer) kafka | [2025-06-18 15:19:13,268] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2025-06-18 15:19:13,286] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-18 15:19:13,291] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:13,291] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:13,291] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:13,291] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:13,291] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:13,291] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.
jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/
java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:13,292] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:13,292] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:13,292] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:13,292] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:13,292] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:13,292] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:13,292] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:13,293] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:13,293] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:13,293] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:13,293] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:13,293] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:13,298] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@52851b44 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-18 15:19:13,304] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-18 15:19:13,310] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-18 15:19:13,312] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-18 15:19:13,321] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-18 15:19:13,327] INFO Socket connection established, initiating session, client: /172.17.0.8:52718, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-18 15:19:13,339] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000021ff00001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-18 15:19:13,344] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-18 15:19:13,699] INFO Cluster ID = N2-PXQtqQN2FT0sdVSBvPw (kafka.server.KafkaServer) kafka | [2025-06-18 15:19:13,704] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2025-06-18 15:19:13,767] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.initial.rebalance.delay.ms = 3000 kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.4-IV0 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | 
log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | 
remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.manager.class.name = null kafka | remote.log.metadata.manager.class.path = null kafka | remote.log.metadata.manager.impl.prefix = null kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = null kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 kafka | sasl.mechanism.controller.protocol = GSSAPI kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope kafka | sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | socket.connection.setup.timeout.max.ms = 30000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.listen.backlog.size = 50 kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka 
| ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null kafka | ssl.keystore.password = null kafka | ssl.keystore.type = JKS kafka | ssl.principal.mapping.rules = DEFAULT kafka | ssl.protocol = TLSv1.3 kafka | ssl.provider = null kafka | ssl.secure.random.implementation = null kafka | ssl.trustmanager.algorithm = PKIX kafka | ssl.truststore.certificates = null kafka | ssl.truststore.location = null kafka | ssl.truststore.password = null kafka | ssl.truststore.type = JKS kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 kafka | transaction.max.timeout.ms = 900000 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 kafka | transaction.state.log.load.buffer.size = 5242880 kafka | transaction.state.log.min.isr = 2 kafka | transaction.state.log.num.partitions = 50 kafka | transaction.state.log.replication.factor = 3 kafka | transaction.state.log.segment.bytes = 104857600 kafka | transactional.id.expiration.ms = 604800000 kafka | unclean.leader.election.enable = false kafka | zookeeper.clientCnxnSocket = null kafka | zookeeper.connect = zookeeper:2181 kafka | zookeeper.connection.timeout.ms = null kafka | zookeeper.max.in.flight.requests = 10 kafka | zookeeper.metadata.migration.enable = false kafka | zookeeper.session.timeout.ms = 18000 kafka | zookeeper.set.acl = false kafka | zookeeper.ssl.cipher.suites = null kafka | zookeeper.ssl.client.enable = false kafka | zookeeper.ssl.crl.enable = false kafka | zookeeper.ssl.enabled.protocols = null kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS kafka | zookeeper.ssl.keystore.location = null kafka | zookeeper.ssl.keystore.password = null kafka | zookeeper.ssl.keystore.type = null kafka | zookeeper.ssl.ocsp.enable = false kafka | zookeeper.ssl.protocol = TLSv1.2 kafka | zookeeper.ssl.truststore.location = null kafka | zookeeper.ssl.truststore.password = null kafka | zookeeper.ssl.truststore.type = null kafka | (kafka.server.KafkaConfig) kafka | [2025-06-18 15:19:13,810] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-18 15:19:13,811] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-18 15:19:13,813] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-18 15:19:13,815] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-18 15:19:13,863] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) kafka | [2025-06-18 15:19:13,867] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager) kafka | [2025-06-18 15:19:13,878] INFO Loaded 0 logs in 15ms. (kafka.log.LogManager) kafka | [2025-06-18 15:19:13,879] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) kafka | [2025-06-18 15:19:13,881] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
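Note: everything above, up to the (kafka.server.KafkaConfig) marker, is the broker's effective configuration dumped once at startup; values not overridden through the cp-kafka image's KAFKA_* environment variables are library defaults. A minimal sketch, not part of this job's scripts, for reading the same configuration back from the live broker (container name kafka and broker id 1 are taken from the log):
# Describe broker 1's runtime configuration, including defaults (--all),
# using the kafka-configs CLI bundled in the Confluent image.
docker exec kafka kafka-configs --bootstrap-server localhost:9092 \
  --entity-type brokers --entity-name 1 --describe --all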
(kafka.log.LogManager) kafka | [2025-06-18 15:19:13,896] INFO Starting the log cleaner (kafka.log.LogCleaner) kafka | [2025-06-18 15:19:13,945] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner) kafka | [2025-06-18 15:19:13,960] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) kafka | [2025-06-18 15:19:13,975] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) kafka | [2025-06-18 15:19:14,040] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-18 15:19:14,405] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2025-06-18 15:19:14,409] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) kafka | [2025-06-18 15:19:14,433] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) kafka | [2025-06-18 15:19:14,433] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2025-06-18 15:19:14,434] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) kafka | [2025-06-18 15:19:14,439] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) kafka | [2025-06-18 15:19:14,445] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-18 15:19:14,464] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-18 15:19:14,466] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-18 15:19:14,468] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-18 15:19:14,473] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-18 15:19:14,484] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) kafka | [2025-06-18 15:19:14,516] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) kafka | [2025-06-18 15:19:14,545] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1750259954532,1750259954532,1,0,0,72057603163684865,258,0,27 kafka | (kafka.zk.KafkaZkClient) kafka | [2025-06-18 15:19:14,546] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) kafka | [2025-06-18 15:19:14,601] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) kafka | [2025-06-18 15:19:14,609] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-18 15:19:14,615] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-18 15:19:14,617] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-18 15:19:14,626] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) kafka | [2025-06-18 15:19:14,639] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:19:14,643] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,647] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:19:14,649] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,653] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) kafka | [2025-06-18 15:19:14,669] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2025-06-18 15:19:14,675] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) kafka | [2025-06-18 15:19:14,675] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2025-06-18 15:19:14,688] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). 
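Note: the registration entries above show the broker creating its ephemeral znode at /brokers/ids/1 and advertising two endpoints: PLAINTEXT://kafka:9092 for containers on the compose network and PLAINTEXT_HOST://localhost:29092 for the host. A quick way to inspect that registration, assuming the zookeeper and kafka container names used throughout this log:
# Print the broker's registration znode; the JSON payload carries the
# advertised endpoints, host, port, and broker epoch seen above.
docker exec kafka zookeeper-shell zookeeper:2181 get /brokers/ids/1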
(kafka.server.metadata.ZkMetadataCache) kafka | [2025-06-18 15:19:14,688] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,694] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,698] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,701] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,717] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-18 15:19:14,717] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,723] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,728] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) kafka | [2025-06-18 15:19:14,753] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) kafka | [2025-06-18 15:19:14,756] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) kafka | [2025-06-18 15:19:14,758] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,758] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,759] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,759] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,765] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,765] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,766] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,766] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) kafka | [2025-06-18 15:19:14,767] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,771] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. 
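Note: with request processing enabled on the SocketServer, both listeners logged earlier (0.0.0.0:9092 and 0.0.0.0:29092) start accepting clients. A hedged smoke test, assuming the kafka container name and, for the commented line, that port 29092 is actually published to the host:
# From inside the compose network:
docker exec kafka kafka-broker-api-versions --bootstrap-server kafka:9092
# From the host, if a Kafka CLI happens to be installed locally:
# kafka-broker-api-versions --bootstrap-server localhost:29092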
(kafka.network.SocketServer) kafka | [2025-06-18 15:19:14,772] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) kafka | [2025-06-18 15:19:14,780] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-18 15:19:14,782] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-18 15:19:14,790] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-18 15:19:14,791] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-18 15:19:14,792] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-18 15:19:14,792] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-18 15:19:14,793] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-18 15:19:14,793] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-18 15:19:14,793] INFO Kafka startTimeMs: 1750259954782 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-18 15:19:14,795] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) kafka | [2025-06-18 15:19:14,796] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-18 15:19:14,797] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,798] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) kafka | [2025-06-18 15:19:14,803] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,805] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,806] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,806] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,809] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,823] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:14,857] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-18 15:19:14,865] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-18 15:19:14,867] INFO 
[BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-18 15:19:19,825] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:19,826] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:43,604] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-18 15:19:43,605] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-18 15:19:43,609] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:43,617] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:43,645] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(UWnQiv87REufwgI0ry9JIQ),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(F9zrgbgSRwWlmafykWvYuA),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, 
addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:43,647] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2025-06-18 15:19:43,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,649] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned 
replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,650] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,651] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,651] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:19:43,651] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | 
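Note: the run of state.change.logger entries above and below records the controller walking policy-pdp-pap-0 and all 50 __consumer_offsets partitions from NonExistentPartition/NonExistentReplica to online, with broker 1 as the only replica and leader throughout. Once the transitions settle, the outcome can be checked with a sketch like the following (same kafka container assumed):
# Expect PartitionCount: 1, ReplicationFactor: 1, Leader: 1 for the
# policy-pdp-pap topic created by the CSIT stack.
docker exec kafka kafka-topics --bootstrap-server localhost:9092 \
  --describe --topic policy-pdp-pap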
[2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to 
NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:19:43,658] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-18 15:19:43,800] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,800] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,800] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,800] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,800] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,800] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,800] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,800] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,800] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,800] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,800] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,800] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 
from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,800] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,801] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,801] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,801] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,801] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,801] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,801] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,801] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,801] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,801] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,801] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,801] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,801] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,801] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,801] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,801] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,801] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,801] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,801] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,801] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,801] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,802] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,802] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,802] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,802] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,802] INFO [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,802] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,802] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,802] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,802] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,802] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,802] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,802] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,802] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,802] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,802] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,802] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,802] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,802] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:19:43,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-18 15:19:43,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-18 15:19:43,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-18 15:19:43,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-18 15:19:43,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-18 15:19:43,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-18 15:19:43,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-18 15:19:43,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 
(state.change.logger) kafka | [2025-06-18 15:19:43,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-18 15:19:43,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-18 15:19:43,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-18 15:19:43,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-18 15:19:43,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-18 15:19:43,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-18 15:19:43,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-18 15:19:43,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-18 15:19:43,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-18 15:19:43,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-18 15:19:43,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-18 15:19:43,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-18 15:19:43,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-18 15:19:43,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-18 15:19:43,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-18 15:19:43,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-18 15:19:43,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 
(state.change.logger) kafka | [2025-06-18 15:19:43,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-18 15:19:43,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-18 15:19:43,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-18 15:19:43,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-18 15:19:43,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-18 15:19:43,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-18 15:19:43,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-18 15:19:43,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-18 15:19:43,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-18 15:19:43,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-18 15:19:43,807] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-18 15:19:43,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-18 15:19:43,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-18 15:19:43,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-18 15:19:43,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-18 15:19:43,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-18 15:19:43,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 
(state.change.logger) kafka | [2025-06-18 15:19:43,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-18 15:19:43,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-18 15:19:43,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-18 15:19:43,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-18 15:19:43,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-18 15:19:43,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-18 15:19:43,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-18 15:19:43,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-18 15:19:43,808] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-18 15:19:43,810] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-18 15:19:43,812] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) kafka | [2025-06-18 15:19:43,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] 
Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,816] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,816] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,816] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,816] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:19:43,816] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-18 15:19:43,821] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2025-06-18 15:19:43,823] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 
15:19:43,823] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,823] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,823] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,823] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,823] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,823] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,823] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,823] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,823] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,824] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,824] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,824] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,824] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,824] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,824] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,824] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,824] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,824] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,824] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,825] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,825] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,825] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,825] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,825] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,825] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,825] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,825] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,825] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,825] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,826] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,826] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,826] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,826] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,826] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,826] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,826] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,826] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,826] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,826] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,827] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,827] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,827] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,827] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,827] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,827] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,827] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 
epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,827] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,827] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,827] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,827] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:19:43,870] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-18 15:19:43,870] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-18 15:19:43,870] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-18 15:19:43,870] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-18 15:19:43,870] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-18 15:19:43,870] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-18 15:19:43,870] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-18 15:19:43,870] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-18 15:19:43,871] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 
(state.change.logger) kafka | [2025-06-18 15:19:43,871] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-18 15:19:43,871] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-18 15:19:43,871] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-18 15:19:43,871] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-18 15:19:43,871] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-18 15:19:43,871] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-18 15:19:43,871] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-18 15:19:43,871] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-18 15:19:43,871] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-18 15:19:43,872] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-18 15:19:43,872] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-18 15:19:43,872] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-18 15:19:43,872] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-18 15:19:43,872] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-18 15:19:43,872] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-18 15:19:43,872] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 
(state.change.logger) kafka | [2025-06-18 15:19:43,872] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-18 15:19:43,872] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-18 15:19:43,872] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-18 15:19:43,872] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-18 15:19:43,872] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-18 15:19:43,873] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-18 15:19:43,873] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-18 15:19:43,873] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-18 15:19:43,873] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-18 15:19:43,873] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-18 15:19:43,873] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-18 15:19:43,873] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-18 15:19:43,874] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-18 15:19:43,874] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-18 15:19:43,874] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-18 15:19:43,874] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 
(state.change.logger) kafka | [2025-06-18 15:19:43,874] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-18 15:19:43,874] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-18 15:19:43,874] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-18 15:19:43,875] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-18 15:19:43,875] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-18 15:19:43,875] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-18 15:19:43,875] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-18 15:19:43,875] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-18 15:19:43,875] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-18 15:19:43,875] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-18 15:19:43,877] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 
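The TRACE entries above show broker 1 starting the become-leader transition for every partition of the internal __consumer_offsets topic (0-49, the default offsets.topic.num.partitions=50) plus policy-pdp-pap-0, after which the ReplicaFetcherManager drops its fetchers for all 51 partitions. The offsets topic matters here because Kafka places each consumer group's coordinator on the leader of the partition selected by hashing the group id. A minimal sketch of that mapping, mirroring Java's String.hashCode and Kafka's sign-bit masking (the group name below is illustrative, not taken from this log):

def java_string_hash(s: str) -> int:
    """Java String.hashCode(): h = 31*h + c over UTF-16 units, wrapping at 32 bits."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - (1 << 32) if h & 0x80000000 else h

def group_metadata_partition(group_id: str, num_partitions: int = 50) -> int:
    # Mirrors Kafka's Utils.abs(groupId.hashCode) % offsets.topic.num.partitions;
    # Utils.abs masks the sign bit rather than calling Math.abs.
    return (java_string_hash(group_id) & 0x7FFFFFFF) % num_partitions

# e.g. a group named "opa-pdp" (hypothetical) lands on one of the 50
# partitions broker 1 just became leader for:
print(group_metadata_partition("opa-pdp"))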
kafka | [2025-06-18 15:19:43,878] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) kafka | [2025-06-18 15:19:43,950] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:43,962] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:43,963] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:43,964] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:43,965] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:43,991] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:43,992] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:43,992] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:43,992] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:43,992] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:43,998] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:43,998] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:43,998] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:43,998] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:43,998] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. 
Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,005] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,005] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,006] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,006] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,006] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,015] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,016] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,016] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,016] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,016] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,024] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,025] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,025] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,025] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,025] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
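Each "Created log ... with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}" entry records the per-topic overrides Kafka applies to its offsets topic: compaction keeps only the latest committed offset per group/topic/partition key, the codec chosen by the producer is retained as-is, and log segments roll at 100 MiB. As a hedged sketch, the same overrides can be applied to an ordinary topic with kafka-python; the broker address and topic name are assumptions, not values from this log:

from kafka.admin import KafkaAdminClient, NewTopic

# Minimal sketch, assuming a reachable broker at localhost:9092 and a
# hypothetical topic name "demo-compacted".
admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
admin.create_topics([
    NewTopic(
        name="demo-compacted",
        num_partitions=1,
        replication_factor=1,
        topic_configs={
            "cleanup.policy": "compact",     # keep only the latest value per key
            "compression.type": "producer",  # retain whatever codec the producer used
            "segment.bytes": str(104857600), # roll segments at 100 MiB
        },
    )
])
admin.close()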
(state.change.logger) kafka | [2025-06-18 15:19:44,032] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,033] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,033] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,033] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,033] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,040] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,041] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,041] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,041] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,041] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,049] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,050] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,050] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,050] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,050] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 15:19:44,057] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,058] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,058] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,058] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,058] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,064] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,065] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,065] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,065] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,065] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,071] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,072] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,072] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,072] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,072] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 15:19:44,080] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,081] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,081] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,081] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,081] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,089] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,090] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,090] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,090] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,090] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,100] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,101] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,101] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,101] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,101] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 15:19:44,108] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,108] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,108] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,108] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,108] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,116] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,116] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,117] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,117] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,117] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,123] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,124] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,124] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,124] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,124] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
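"No checkpointed highwatermark is found" followed by "initial high watermark 0" is the expected pair for a partition created on a fresh data directory: the high watermark, the offset up to which all in-sync replicas have replicated and up to which consumers may read, starts at 0. A small sketch, assuming the same single-broker setup at localhost:9092, that asks the broker for it:

from kafka import KafkaConsumer, TopicPartition

# Sketch: end_offsets() reports the first offset past the last readable
# message; immediately after the creation logged above it would be 0
# for policy-pdp-pap-0.
consumer = KafkaConsumer(bootstrap_servers="localhost:9092")
tp = TopicPartition("policy-pdp-pap", 0)
print(consumer.end_offsets([tp]))  # e.g. {TopicPartition(topic='policy-pdp-pap', partition=0): 0}
consumer.close()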
(state.change.logger) kafka | [2025-06-18 15:19:44,131] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,132] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,132] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,132] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,132] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,139] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,140] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,140] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,140] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,140] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,146] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,147] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,147] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,147] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,148] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 15:19:44,154] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,155] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,155] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,155] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,155] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,163] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,163] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,164] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,164] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,164] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,170] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,171] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,171] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,171] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,171] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 15:19:44,178] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,179] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,179] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,179] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,179] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,188] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,189] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,189] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,189] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,189] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,197] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,197] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,198] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,198] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,198] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 15:19:44,203] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,204] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,204] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,204] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,204] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,211] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,211] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,211] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,211] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,211] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,218] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,219] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,219] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,219] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,219] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 15:19:44,236] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,237] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,237] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,237] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,237] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,244] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,246] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,246] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,246] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,246] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,252] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,252] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,252] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,252] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,252] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
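Those high watermarks are persisted in a plain-text checkpoint file (replication-offset-checkpoint) inside the log dir shown in the LogLoader entries, which is why a brand-new /var/lib/kafka/data has nothing to report for any of these partitions. A sketch of a reader for it, assuming the two-header-lines-then-"topic partition offset" layout of Kafka's OffsetCheckpointFile:

from pathlib import Path

# Sketch: line 0 is the format version, line 1 the entry count, then one
# "topic partition offset" triple per line. The path matches the dir= value
# in the LogLoader entries above.
def read_offset_checkpoint(path="/var/lib/kafka/data/replication-offset-checkpoint"):
    lines = Path(path).read_text().splitlines()
    version, count = int(lines[0]), int(lines[1])
    hw = {}
    for entry in lines[2:2 + count]:
        topic, partition, offset = entry.rsplit(" ", 2)
        hw[(topic, int(partition))] = int(offset)
    return version, hw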
(state.change.logger) kafka | [2025-06-18 15:19:44,259] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,259] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,259] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,259] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,259] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,267] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,267] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,267] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,267] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,267] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(UWnQiv87REufwgI0ry9JIQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,278] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,279] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,279] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,279] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,279] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-18 15:19:44,287] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,288] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,288] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,288] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,288] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,295] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,296] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,296] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,296] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,296] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,303] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,304] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,304] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,304] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,304] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
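One entry above stands apart: policy-pdp-pap-0 is created "with properties {}", i.e. with no per-topic overrides, so broker defaults apply (cleanup.policy=delete rather than compact, for one), and it carries its own topic id Some(UWnQiv87REufwgI0ry9JIQ) while every __consumer_offsets partition shares Some(F9zrgbgSRwWlmafykWvYuA). A sketch that would confirm the effective configuration (broker address assumed):

from kafka.admin import KafkaAdminClient, ConfigResource, ConfigResourceType

# Sketch: with no per-topic overrides, describe_configs reports the
# broker defaults in effect for policy-pdp-pap.
admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
configs = admin.describe_configs([ConfigResource(ConfigResourceType.TOPIC, "policy-pdp-pap")])
print(configs)
admin.close()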
(state.change.logger) kafka | [2025-06-18 15:19:44,310] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,311] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,311] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,311] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,311] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,318] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,319] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,319] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,319] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,319] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:19:44,326] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:19:44,326] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-18 15:19:44,326] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,326] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:19:44,326] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger)
kafka | [2025-06-18 15:19:44,334] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-18 15:19:44,334] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-18 15:19:44,335] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
kafka | [2025-06-18 15:19:44,335] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-18 15:19:44,335] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-18 15:19:44,342] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-18 15:19:44,343] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-18 15:19:44,343] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
kafka | [2025-06-18 15:19:44,343] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-18 15:19:44,343] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-18 15:19:44,350] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-18 15:19:44,351] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-18 15:19:44,351] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
kafka | [2025-06-18 15:19:44,351] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-18 15:19:44,351] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-18 15:19:44,358] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-18 15:19:44,359] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-18 15:19:44,359] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
kafka | [2025-06-18 15:19:44,359] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-18 15:19:44,359] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-18 15:19:44,384] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-18 15:19:44,385] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-18 15:19:44,385] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
kafka | [2025-06-18 15:19:44,385] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-18 15:19:44,385] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-18 15:19:44,392] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-18 15:19:44,392] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-18 15:19:44,393] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
kafka | [2025-06-18 15:19:44,393] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-18 15:19:44,393] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-18 15:19:44,399] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-18 15:19:44,400] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-18 15:19:44,400] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
kafka | [2025-06-18 15:19:44,400] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-18 15:19:44,400] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-18 15:19:44,407] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-18 15:19:44,408] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-18 15:19:44,408] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
kafka | [2025-06-18 15:19:44,408] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-18 15:19:44,408] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-18 15:19:44,417] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-18 15:19:44,417] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-18 15:19:44,417] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
kafka | [2025-06-18 15:19:44,417] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-18 15:19:44,418] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(F9zrgbgSRwWlmafykWvYuA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
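
[editor's note] Each __consumer_offsets partition above is created as a compacted log with 100 MiB segments ({cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}); compaction keeps only the latest committed offset per group/topic/partition key. A minimal sketch of declaring a topic with the same log properties through the Kafka AdminClient follows; the topic name, partition count, and bootstrap address are illustrative assumptions, not values taken from this job.

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CompactedTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed bootstrap address; this log later shows the broker reachable as kafka:9092.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            // Same per-partition log properties the broker reports above.
            NewTopic topic = new NewTopic("example-compacted", 50, (short) 1)
                .configs(Map.of(
                    TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT,
                    TopicConfig.SEGMENT_BYTES_CONFIG, "104857600"));
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
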
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2025-06-18 15:19:44,424] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2025-06-18 15:19:44,425] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2025-06-18 15:19:44,425] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2025-06-18 15:19:44,425] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2025-06-18 15:19:44,425] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2025-06-18 15:19:44,425] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2025-06-18 15:19:44,425] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
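
[editor's note] The TRACE entries above (and the two that follow) confirm the become-leader transition for all 51 partitions of the LeaderAndIsr request: broker 1 is now leader of every __consumer_offsets partition and of policy-pdp-pap-0, each with ISR [1]. A hedged sketch of verifying that placement from a client, reusing the kafka:9092 address this log reports later, could look like:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class LeaderIsrSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            TopicDescription desc = admin.describeTopics(List.of("policy-pdp-pap"))
                .all().get().get("policy-pdp-pap");
            for (TopicPartitionInfo p : desc.partitions()) {
                // On this single-broker setup the expected output is leader node 1 and ISR [1].
                System.out.printf("partition %d leader=%s isr=%s%n", p.partition(), p.leader(), p.isr());
            }
        }
    }
}
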
kafka | [2025-06-18 15:19:44,425] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2025-06-18 15:19:44,425] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2025-06-18 15:19:44,430] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,433] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,434] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,434] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,434] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,434] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
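
[editor's note] A coordinator is elected for every one of the 50 __consumer_offsets partitions because each consumer group is pinned to exactly one of them: the coordinator partition is the group id hashed modulo the offsets-topic partition count. A minimal sketch of that mapping (mirroring GroupMetadataManager.partitionFor; the group id below is hypothetical):

public class CoordinatorPartitionSketch {
    // Kafka pins a group to __consumer_offsets-<partitionFor(groupId)>; the broker that
    // leads that partition acts as the group's coordinator. The bitmask keeps the hash
    // non-negative, as in Kafka's own utility code.
    static int coordinatorPartition(String groupId, int offsetsTopicPartitions) {
        return (groupId.hashCode() & 0x7fffffff) % offsetsTopicPartitions;
    }

    public static void main(String[] args) {
        // 50 is the default offsets.topic.num.partitions, matching partitions 0..49 above.
        System.out.println(coordinatorPartition("example-group", 50));
    }
}
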
kafka | [2025-06-18 15:19:44,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:19:44,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,441] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 8 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,450] INFO [Broker id=1] Finished LeaderAndIsr request in 629ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
kafka | [2025-06-18 15:19:44,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,451] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,451] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,451] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,451] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,452] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,452] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,452] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,452] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,454] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,454] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,454] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,454] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,454] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,455] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,455] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,455] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,455] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,456] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 18 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,456] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,456] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,456] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,457] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 19 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,457] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=F9zrgbgSRwWlmafykWvYuA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=UWnQiv87REufwgI0ry9JIQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-18 15:19:44,457] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,457] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,457] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,458] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 20 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,458] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,458] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,458] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,458] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,458] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,458] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,459] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,459] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,459] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,459] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,459] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-18 15:19:44,459] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
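
[editor's note] With offsets and group metadata loaded for all 50 partitions, and the LeaderAndIsr round trip acknowledged with errorCode=0 everywhere, the coordinator is ready to serve group membership and offset-commit traffic. As an illustrative sketch only (the group id is hypothetical), committed offsets could then be read back through the admin API:

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class GroupOffsetsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            // Each entry below lives, compacted, in the __consumer_offsets partition
            // selected by hashing this (hypothetical) group id.
            Map<TopicPartition, OffsetAndMetadata> offsets =
                admin.listConsumerGroupOffsets("example-group")
                     .partitionsToOffsetAndMetadata().get();
            offsets.forEach((tp, om) -> System.out.printf("%s -> %d%n", tp, om.offset()));
        }
    }
}
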
kafka | [2025-06-18 15:19:44,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1,
leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-18 15:19:44,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-18 15:19:44,469] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-18 15:19:44,470] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-18 15:19:44,524] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-c38ebaac-579a-4eb3-8a41-6aab48ee8550 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:19:44,538] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-c38ebaac-579a-4eb3-8a41-6aab48ee8550 with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-c38ebaac-579a-4eb3-8a41-6aab48ee8550) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:19:45,406] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 4635555d-36e7-41ae-9c08-89802cf0473f in Empty state. Created a new member id consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3-792d4223-9c2d-40a0-8654-1fc390f5a010 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:19:45,409] INFO [GroupCoordinator 1]: Preparing to rebalance group 4635555d-36e7-41ae-9c08-89802cf0473f in state PreparingRebalance with old generation 0 (__consumer_offsets-23) (reason: Adding new member consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3-792d4223-9c2d-40a0-8654-1fc390f5a010 with group instance id None; client reason: need to re-join with the given member-id: consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3-792d4223-9c2d-40a0-8654-1fc390f5a010) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:19:47,552] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:19:47,573] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-c38ebaac-579a-4eb3-8a41-6aab48ee8550 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:19:48,412] INFO [GroupCoordinator 1]: Stabilized group 4635555d-36e7-41ae-9c08-89802cf0473f generation 1 (__consumer_offsets-23) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:19:48,418] INFO [GroupCoordinator 1]: Assignment received from leader consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3-792d4223-9c2d-40a0-8654-1fc390f5a010 for group 4635555d-36e7-41ae-9c08-89802cf0473f for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:20:28,741] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group opa-pdp in Empty state. Created a new member id rdkafka-68a1eb7b-ab7a-4e37-938e-44912f1305cc and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:20:28,743] INFO [GroupCoordinator 1]: Preparing to rebalance group opa-pdp in state PreparingRebalance with old generation 0 (__consumer_offsets-25) (reason: Adding new member rdkafka-68a1eb7b-ab7a-4e37-938e-44912f1305cc with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:20:31,744] INFO [GroupCoordinator 1]: Stabilized group opa-pdp generation 1 (__consumer_offsets-25) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:20:31,749] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-68a1eb7b-ab7a-4e37-938e-44912f1305cc for group opa-pdp for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:21:39,513] INFO Creating topic policy-notification with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-18 15:21:39,530] INFO [Controller id=1] New topics: [Set(policy-notification)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-notification,Some(xJcbVHnGScOEo_pXrO0V3g),Map(policy-notification-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-18 15:21:39,530] INFO [Controller id=1] New partition creation callback for policy-notification-0 (kafka.controller.KafkaController) kafka | [2025-06-18 15:21:39,530] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-18 15:21:39,530] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-18 15:21:39,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-18 15:21:39,531] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-18 15:21:39,547] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-18 15:21:39,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-notification-0 (state.change.logger) kafka | [2025-06-18 15:21:39,548] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-18 15:21:39,548] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) kafka | [2025-06-18 15:21:39,549] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-18 15:21:39,549] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-18 15:21:39,551] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 1 partitions (state.change.logger) kafka | [2025-06-18 15:21:39,551] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-18 15:21:39,552] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition 
policy-notification-0 (state.change.logger) kafka | [2025-06-18 15:21:39,552] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-notification-0) (kafka.server.ReplicaFetcherManager) kafka | [2025-06-18 15:21:39,552] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) kafka | [2025-06-18 15:21:39,556] INFO [LogLoader partition=policy-notification-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-18 15:21:39,561] INFO Created log for partition policy-notification-0 in /var/lib/kafka/data/policy-notification-0 with properties {} (kafka.log.LogManager) kafka | [2025-06-18 15:21:39,562] INFO [Partition policy-notification-0 broker=1] No checkpointed highwatermark is found for partition policy-notification-0 (kafka.cluster.Partition) kafka | [2025-06-18 15:21:39,562] INFO [Partition policy-notification-0 broker=1] Log loaded for partition policy-notification-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-18 15:21:39,562] INFO [Broker id=1] Leader policy-notification-0 with topic id Some(xJcbVHnGScOEo_pXrO0V3g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-18 15:21:39,567] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition policy-notification-0 (state.change.logger) kafka | [2025-06-18 15:21:39,567] INFO [Broker id=1] Finished LeaderAndIsr request in 16ms correlationId 3 from controller 1 for 1 partitions (state.change.logger) kafka | [2025-06-18 15:21:39,568] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=xJcbVHnGScOEo_pXrO0V3g, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-18 15:21:39,570] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-notification-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-18 15:21:39,571] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-18 15:21:39,572] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-18 15:23:17,981] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-94251c72-d0fc-4fb4-82b7-b70e72175c7c and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:23:17,984] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member rdkafka-94251c72-d0fc-4fb4-82b7-b70e72175c7c with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:23:20,985] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:23:20,989] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-94251c72-d0fc-4fb4-82b7-b70e72175c7c for group testgrp for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:23:21,106] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 1 (__consumer_offsets-3) (reason: Removing member rdkafka-94251c72-d0fc-4fb4-82b7-b70e72175c7c on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:23:21,107] INFO [GroupCoordinator 1]: Group testgrp with generation 2 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:23:21,110] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-94251c72-d0fc-4fb4-82b7-b70e72175c7c, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.6, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:23:43,870] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-d386d7d6-c159-4c4b-897d-dcf1240ac0ca and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:23:43,872] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 2 (__consumer_offsets-3) (reason: Adding new member rdkafka-d386d7d6-c159-4c4b-897d-dcf1240ac0ca with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:23:46,873] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 3 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:23:46,877] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-d386d7d6-c159-4c4b-897d-dcf1240ac0ca for group testgrp for generation 3. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:23:46,886] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 3 (__consumer_offsets-3) (reason: Removing member rdkafka-d386d7d6-c159-4c4b-897d-dcf1240ac0ca on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:23:46,886] INFO [GroupCoordinator 1]: Group testgrp with generation 4 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:23:46,888] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-d386d7d6-c159-4c4b-897d-dcf1240ac0ca, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.6, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:24:08,544] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-77edabc6-7553-4105-a882-0803d67dbd85 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:24:08,546] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 4 (__consumer_offsets-3) (reason: Adding new member rdkafka-77edabc6-7553-4105-a882-0803d67dbd85 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:24:11,547] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 5 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-18 15:24:11,555] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-77edabc6-7553-4105-a882-0803d67dbd85 for group testgrp for generation 5. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:24:11,563] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 5 (__consumer_offsets-3) (reason: Removing member rdkafka-77edabc6-7553-4105-a882-0803d67dbd85 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:24:11,563] INFO [GroupCoordinator 1]: Group testgrp with generation 6 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:24:11,563] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-77edabc6-7553-4105-a882-0803d67dbd85, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.6, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-18 15:24:19,828] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka | [2025-06-18 15:24:19,829] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
kafka | [2025-06-18 15:24:19,834] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 HashMap() (kafka.controller.KafkaController)
kafka | [2025-06-18 15:24:19,836] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController)
policy-api | Waiting for policy-db-migrator port 6824...
policy-api | policy-db-migrator (172.17.0.6:6824) open
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
policy-api |
policy-api |   .   ____          _            __ _ _
policy-api |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-api |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-api |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-api |  =========|_|==============|___/=/_/_/_/
policy-api |
policy-api |  :: Spring Boot ::                (v3.4.6)
policy-api |
policy-api | [2025-06-18T15:19:21.323+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final
policy-api | [2025-06-18T15:19:21.400+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 34 (/app/api.jar started by policy in /opt/app/policy/api/bin)
policy-api | [2025-06-18T15:19:21.401+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default"
policy-api | [2025-06-18T15:19:22.993+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-api | [2025-06-18T15:19:23.182+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 177 ms. Found 6 JPA repository interfaces.
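The testgrp churn in the broker log above is the standard consumer-group lifecycle: a librdkafka-based client (hence the rdkafka-* member ids) joins with no member id, the coordinator assigns one and asks it to rejoin, the group stabilizes at a new generation, and an explicit close produces the LeaveGroup that empties the group again. A minimal sketch of one such cycle in Python, assuming the confluent-kafka package and a broker reachable at kafka:9092 (group name, topic, and session timeout are taken from the log; the loop bound is illustrative):

from confluent_kafka import Consumer

# Join the group seen rebalancing above, read briefly, then leave cleanly.
consumer = Consumer({
    "bootstrap.servers": "kafka:9092",
    "group.id": "testgrp",              # the coordinator logs JoinGroup/rebalance for this group
    "auto.offset.reset": "earliest",
    "session.timeout.ms": 45000,        # matches sessionTimeoutMs in the MemberMetadata entry
})
consumer.subscribe(["policy-pdp-pap"])  # subscribing triggers the join/rebalance/Stabilized cycle

for _ in range(10):
    msg = consumer.poll(1.0)            # fetch messages; librdkafka heartbeats on a background thread
    if msg is not None and msg.error() is None:
        print(msg.topic(), msg.partition(), msg.value())

consumer.close()                        # explicit `LeaveGroup`, as recorded by the coordinator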
policy-api | [2025-06-18T15:19:23.935+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
policy-api | [2025-06-18T15:19:23.950+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-18T15:19:23.953+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-api | [2025-06-18T15:19:23.953+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
policy-api | [2025-06-18T15:19:23.998+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
policy-api | [2025-06-18T15:19:23.999+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2519 ms
policy-api | [2025-06-18T15:19:24.376+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-api | [2025-06-18T15:19:24.466+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
policy-api | [2025-06-18T15:19:24.519+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-api | [2025-06-18T15:19:24.978+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-api | [2025-06-18T15:19:25.025+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-api | [2025-06-18T15:19:25.259+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@62e99458
policy-api | [2025-06-18T15:19:25.261+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-api | [2025-06-18T15:19:25.350+00:00|INFO|pooling|main] HHH10001005: Database info:
policy-api | 	Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
policy-api | 	Database driver: undefined/unknown
policy-api | 	Database version: 16.4
policy-api | 	Autocommit mode: undefined/unknown
policy-api | 	Isolation level: undefined/unknown
policy-api | 	Minimum pool size: undefined/unknown
policy-api | 	Maximum pool size: undefined/unknown
policy-api | [2025-06-18T15:19:27.568+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-api | [2025-06-18T15:19:27.572+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-api | [2025-06-18T15:19:28.258+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
policy-api | [2025-06-18T15:19:29.195+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
policy-api | [2025-06-18T15:19:30.400+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering.
Explicitly configure spring.jpa.open-in-view to disable this warning
policy-api | [2025-06-18T15:19:30.456+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
policy-api | [2025-06-18T15:19:31.183+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
policy-api | [2025-06-18T15:19:31.351+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-18T15:19:31.380+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1'
policy-api | [2025-06-18T15:19:31.409+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.851 seconds (process running for 11.551)
policy-api | [2025-06-18T15:19:39.923+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-api | [2025-06-18T15:19:39.924+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
policy-api | [2025-06-18T15:19:39.925+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
policy-api | [2025-06-18T15:22:55.788+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-4] ***** OrderedServiceImpl implementers:
policy-api | []
policy-api | [2025-06-18T15:24:11.848+00:00|WARN|CommonRestController|http-nio-6969-exec-1] "incoming fragment" INVALID, item has status INVALID
policy-api | item "entity" value "abac:1.0.7" INVALID, does not equal existing entity
policy-api |
policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
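ROBOT_VARIABLES above is a list of -v name:value overrides handed to Robot Framework; the same run can be reproduced through Robot's Python entry point. A sketch under the assumption that the two suite files and the service addresses above are reachable locally (only a subset of the logged variables is repeated here):

from robot import run

# Programmatic equivalent of the logged invocation; outputdir matches the
# /tmp/results paths reported in the output below.
rc = run(
    "opa-pdp-test.robot",
    "opa-pdp-slas.robot",
    variable=[
        "POLICY_API_IP:policy-api:6969",
        "POLICY_PAP_IP:policy-pap:6969",
        "POLICY_OPA_IP:policy-opa-pdp:8282",
        "KAFKA_IP:kafka:9092",
        "PROMETHEUS_IP:prometheus:9090",
        "TEST_ENV:docker",
    ],
    outputdir="/tmp/results",
)
print("RESULT:", rc)  # 0 when every test passes, mirroring "RESULT: 0" below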
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Healthcheck :: Verify OPA PDP health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateDataBeforePolicyDeployment | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesZonePolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesVehiclePolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesAbacPolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test | PASS |
policy-csit | 5 tests, 5 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas | PASS |
policy-csit | 5 tests, 5 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas | PASS |
policy-csit | 10 tests, 10 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-db-migrator | Waiting for postgres port 5432...
policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.3) port 5432 (tcp) failed: Connection refused
policy-db-migrator | Connection to postgres (172.17.0.3) 5432 port [tcp/postgresql] succeeded!
policy-db-migrator | Initializing policyadmin...
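The nc retries above are the usual wait-for-port gate: the migrator blocks until postgres accepts TCP connections before touching any schema. A minimal Python equivalent of that loop, assuming the same postgres:5432 endpoint from the log (the retry count and delay are illustrative):

import socket
import time

def wait_for_port(host: str, port: int, retries: int = 30, delay: float = 2.0) -> None:
    """Block until host:port accepts TCP connections, like the nc loop above."""
    for _ in range(retries):
        try:
            with socket.create_connection((host, port), timeout=2.0):
                print(f"Connection to {host} {port} port succeeded!")
                return
        except OSError:
            print(f"nc: connect to {host} port {port} (tcp) failed: Connection refused")
            time.sleep(delay)
    raise TimeoutError(f"{host}:{port} not reachable after {retries} attempts")

wait_for_port("postgres", 5432)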
policy-db-migrator | 321 blocks policy-db-migrator | Preparing upgrade release version: 0800 policy-db-migrator | Preparing upgrade release version: 0900 policy-db-migrator | Preparing upgrade release version: 1000 policy-db-migrator | Preparing upgrade release version: 1100 policy-db-migrator | Preparing upgrade release version: 1200 policy-db-migrator | Preparing upgrade release version: 1300 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------+--------- policy-db-migrator | policyadmin | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | 
operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0210-jpatoscadatatype_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 
policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0470-pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 
policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0640-toscanodetypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0770-toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0780-toscarequirements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0810-toscatopologytemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 
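The repeating pattern above — "> upgrade NNNN-<name>.sql", a DDL result (CREATE TABLE / CREATE INDEX / ALTER TABLE), an "INSERT 0 1", then "rc=0" — is the migrator executing each versioned script in numeric order and recording the outcome in a per-schema changelog table; the trailing "INSERT 0 1" is that bookkeeping row, not application data. One plausible shape of such a loop, as a minimal sketch — SQL_DIR, DB, and the exact changelog columns are assumptions based on the log output, not the actual policy-db-migrator source:

    # Hypothetical sketch of the per-script upgrade loop; names are illustrative.
    for script in $(ls "$SQL_DIR"/*.sql | sort); do
        name=$(basename "$script")
        echo "> upgrade $name"
        psql -U policy_user -d "$DB" -f "$script"
        rc=$?
        # Record the attempt so reruns can detect already-applied scripts.
        psql -U policy_user -d "$DB" -c "INSERT INTO ${DB}_schema_changelog
            (script, operation, success) VALUES ('$name', 'upgrade', $((rc==0)));"
        echo "rc=$rc"
        [ "$rc" -eq 0 ] || break
    done

The numeric prefix on each file name (0610, 0620, ...) is what makes a plain lexicographic sort yield the correct application order.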
policy-db-migrator | policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdp.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0210-sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-sequence.sql policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 
policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-upgrade.sql policy-db-migrator | msg policy-db-migrator | --------------------------- policy-db-migrator | upgrade to 1100 completed policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | DROP INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdpstatistics.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-statistics_sequence.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | policyadmin: OK: upgrade (1300) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | name | version policy-db-migrator | -------------+--------- policy-db-migrator | policyadmin | 1300 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:08.322341 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:08.372678 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:08.421882 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:08.468786 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:08.519697 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:08.575962 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:08.628439 policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:08.675059 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:08.721282 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 
1806251519080800u | 1 | 2025-06-18 15:19:08.792707 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:08.83731 policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:08.887167 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:08.95222 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:08.997391 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:09.047071 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:09.106511 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:09.162567 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:09.22733 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:09.28376 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:09.345197 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:09.400886 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:09.450336 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:09.494549 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:09.548181 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:09.61084 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:09.672495 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:09.732408 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:09.786648 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:09.842643 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:09.904107 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:09.95973 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:10.007929 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:10.090379 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:10.144604 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:10.2079 policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 0800 
| 1806251519080800u | 1 | 2025-06-18 15:19:10.267346 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:10.323 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:10.381253 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:10.439432 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:10.505748 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:10.559247 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:10.612142 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:10.67221 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:10.730577 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:10.786131 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:10.836226 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:10.893678 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:10.954823 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:11.003044 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:11.052289 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:11.112105 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:11.160124 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:11.214551 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:11.27403 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:11.326709 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:11.378818 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:11.444491 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:11.493849 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:11.546082 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:11.607245 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:11.654116 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:11.704904 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 
15:19:11.764255 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:11.822327 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:11.876542 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:11.92784 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:11.987372 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:12.037343 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:12.107611 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:12.166997 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:12.226163 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:12.287567 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:12.345499 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:12.401019 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:12.450287 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:12.495929 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:12.549097 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:12.601155 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:12.657323 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:12.708675 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:12.763207 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:12.830354 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:12.880821 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:12.927616 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:12.980556 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:13.03397 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:13.082435 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:13.141638 
policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:13.189785 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:13.238167 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:13.286005 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:13.33016 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:13.391525 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:13.442557 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:13.49541 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1806251519080800u | 1 | 2025-06-18 15:19:13.554351 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1806251519080900u | 1 | 2025-06-18 15:19:13.604324 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1806251519080900u | 1 | 2025-06-18 15:19:13.661503 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1806251519080900u | 1 | 2025-06-18 15:19:13.715006 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1806251519080900u | 1 | 2025-06-18 15:19:13.766702 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1806251519080900u | 1 | 2025-06-18 15:19:13.831362 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1806251519080900u | 1 | 2025-06-18 15:19:13.889613 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1806251519080900u | 1 | 2025-06-18 15:19:13.94155 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1806251519080900u | 1 | 2025-06-18 15:19:13.994749 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1806251519080900u | 1 | 2025-06-18 15:19:14.058048 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1806251519080900u | 1 | 2025-06-18 15:19:14.115359 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1806251519080900u | 1 | 2025-06-18 15:19:14.165554 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1806251519080900u | 1 | 2025-06-18 15:19:14.220386 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1806251519080900u | 1 | 2025-06-18 15:19:14.280747 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1806251519081000u | 1 | 2025-06-18 15:19:14.337862 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1806251519081000u | 1 | 2025-06-18 15:19:14.39621 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1806251519081000u | 1 | 2025-06-18 15:19:14.451799 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1806251519081000u | 1 | 2025-06-18 15:19:14.499514 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1806251519081000u | 1 | 2025-06-18 
15:19:14.545257 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1806251519081000u | 1 | 2025-06-18 15:19:14.605611 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1806251519081000u | 1 | 2025-06-18 15:19:14.663382 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1806251519081000u | 1 | 2025-06-18 15:19:14.720468 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1806251519081000u | 1 | 2025-06-18 15:19:14.772868 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1806251519081100u | 1 | 2025-06-18 15:19:14.817486 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1806251519081200u | 1 | 2025-06-18 15:19:14.872638 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1806251519081200u | 1 | 2025-06-18 15:19:14.924811 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1806251519081200u | 1 | 2025-06-18 15:19:14.9815 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1806251519081200u | 1 | 2025-06-18 15:19:15.037987 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1806251519081300u | 1 | 2025-06-18 15:19:15.084329 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1806251519081300u | 1 | 2025-06-18 15:19:15.133642 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1806251519081300u | 1 | 2025-06-18 15:19:15.185322 policy-db-migrator | (126 rows) policy-db-migrator | policy-db-migrator | policyadmin: OK @ 1300 policy-db-migrator | Initializing clampacm... policy-db-migrator | 97 blocks policy-db-migrator | Preparing upgrade release version: 1400 policy-db-migrator | Preparing upgrade release version: 1500 policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Preparing upgrade release version: 1601 policy-db-migrator | Preparing upgrade release version: 1700 policy-db-migrator | Preparing upgrade release version: 1701 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | clampacm: upgrade available: 0 -> 1701 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1701 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql policy-db-migrator | 
CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-nodetemplatestate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participantsupportedelements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-participantreplica.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-participant.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participant_replica_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql 
policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-message.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-messagejob.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-participantreplica.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | clampacm: OK: upgrade (1701) 
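Between phases the migrator re-prints its bookkeeping: the "NOTICE: relation ... already exists, skipping" lines are what PostgreSQL emits for CREATE TABLE IF NOT EXISTS on a rerun, the "name | version" listing comes from the schema_versions table, and the long id/script/operation table is the schema changelog. A sketch of equivalent queries follows — the column types are assumed from the printed output, not taken from the real DDL:

    # Assumed table shapes; the real policy-db-migrator DDL may differ.
    psql -U policy_user -d clampacm <<'SQL'
    CREATE TABLE IF NOT EXISTS schema_versions (
        name    VARCHAR(60),   -- schema name, e.g. clampacm
        version VARCHAR(20)    -- last applied release, e.g. 1701
    );
    SELECT name, version FROM schema_versions;
    SELECT id, script, operation, from_version, to_version, tag, success, attime
      FROM clampacm_schema_changelog ORDER BY id;
    SQL

This is why the "clampacm: OK @ 1701" line can be trusted as idempotent state rather than a one-off message: the version and the full upgrade history are persisted alongside the migrated schema.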
policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 1701 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1806251519151400u | 1 | 2025-06-18 15:19:15.868812 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1806251519151400u | 1 | 2025-06-18 15:19:15.933876 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1806251519151400u | 1 | 2025-06-18 15:19:15.991361 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1806251519151400u | 1 | 2025-06-18 15:19:16.052195 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1806251519151400u | 1 | 2025-06-18 15:19:16.107837 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1806251519151400u | 1 | 2025-06-18 15:19:16.162506 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1806251519151400u | 1 | 2025-06-18 15:19:16.222397 policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 
1806251519151400u | 1 | 2025-06-18 15:19:16.275838 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1806251519151400u | 1 | 2025-06-18 15:19:16.330213 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1806251519151400u | 1 | 2025-06-18 15:19:16.382889 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1806251519151400u | 1 | 2025-06-18 15:19:16.438279 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1806251519151400u | 1 | 2025-06-18 15:19:16.492713 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1806251519151400u | 1 | 2025-06-18 15:19:16.545725 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1806251519151500u | 1 | 2025-06-18 15:19:16.598411 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1806251519151500u | 1 | 2025-06-18 15:19:16.66318 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1806251519151500u | 1 | 2025-06-18 15:19:16.721595 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1806251519151500u | 1 | 2025-06-18 15:19:16.773435 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1806251519151500u | 1 | 2025-06-18 15:19:16.85238 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1806251519151500u | 1 | 2025-06-18 15:19:16.905945 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1806251519151500u | 1 | 2025-06-18 15:19:16.95724 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1806251519151500u | 1 | 2025-06-18 15:19:17.008738 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1806251519151600u | 1 | 2025-06-18 15:19:17.061331 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1806251519151600u | 1 | 2025-06-18 15:19:17.111141 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 1806251519151601u | 1 | 2025-06-18 15:19:17.159551 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1806251519151601u | 1 | 2025-06-18 15:19:17.204756 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1806251519151700u | 1 | 2025-06-18 15:19:17.263001 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1806251519151700u | 1 | 2025-06-18 15:19:17.321573 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1806251519151700u | 1 | 2025-06-18 15:19:17.37907 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1806251519151701u | 1 | 2025-06-18 15:19:17.434565 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1806251519151701u | 1 | 2025-06-18 15:19:17.489271 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1806251519151701u | 1 | 2025-06-18 15:19:17.543335 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1806251519151701u | 1 | 2025-06-18 15:19:17.592635 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1806251519151701u | 1 | 2025-06-18 15:19:17.646471 policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1806251519151701u | 1 | 
2025-06-18 15:19:17.69985 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1806251519151701u | 1 | 2025-06-18 15:19:17.751285 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1806251519151701u | 1 | 2025-06-18 15:19:17.805801 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1806251519151701u | 1 | 2025-06-18 15:19:17.859879 policy-db-migrator | (37 rows) policy-db-migrator | policy-db-migrator | clampacm: OK @ 1701 policy-db-migrator | Initializing pooling... policy-db-migrator | 4 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | pooling: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-distributed.locking.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | pooling: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1806251519181600u | 1 | 2025-06-18 15:19:18.522823 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | pooling: OK @ 1600 policy-db-migrator | Initializing operationshistory... policy-db-migrator | 6 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | 
operationshistory | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-operationshistory.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | operationshistory: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | 
policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1806251519191600u | 1 | 2025-06-18 15:19:19.190163 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1806251519191600u | 1 | 2025-06-18 15:19:19.259686 policy-db-migrator | (2 rows) policy-db-migrator | policy-db-migrator | operationshistory: OK @ 1600 policy-opa-pdp | Waiting for kafka port 9092... policy-opa-pdp | nc: connect to kafka (172.17.0.8) port 9092 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to kafka (172.17.0.8) port 9092 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to kafka (172.17.0.8) port 9092 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to kafka (172.17.0.8) port 9092 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to kafka (172.17.0.8) port 9092 (tcp) failed: Connection refused policy-opa-pdp | Connection to kafka (172.17.0.8) 9092 port [tcp/*] succeeded! policy-opa-pdp | Waiting for pap port 6969... 
policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | Connection to pap (172.17.0.9) 6969 port [tcp/*] succeeded! 
policy-opa-pdp | time="2025-06-18T15:20:23Z" level=debug msg="###################################### " policy-opa-pdp | time="2025-06-18T15:20:23Z" level=debug msg="OPA-PDP: Starting initialisation " policy-opa-pdp | time="2025-06-18T15:20:23Z" level=debug msg="###################################### " policy-opa-pdp | time="2025-06-18T15:20:23Z" level=warning msg="KAFKA_URL not defined, using default value" policy-opa-pdp | time="2025-06-18T15:20:23Z" level=warning msg="PAP_TOPIC not defined, using default value" policy-opa-pdp | time="2025-06-18T15:20:23Z" level=warning msg="PATCH_TOPIC not defined, using default value" policy-opa-pdp | time="2025-06-18T15:20:23Z" level=warning msg="PATCH_GROUPID not defined, using default value" policy-opa-pdp | time="2025-06-18T15:20:23Z" level=warning msg="API_USER not defined, using default value" policy-opa-pdp | time="2025-06-18T15:20:23Z" level=warning msg="API_PASSWORD not defined, using default value" policy-opa-pdp | time="2025-06-18T15:20:23Z" level=warning msg="UseSASLForKAFKA not defined, using default value" policy-opa-pdp | decodedConfig org.apache.kafka.common.security.scram.ScramLoginModule required username="policy-opa-pdp-ku" password="" policy-opa-pdp | time="2025-06-18T15:20:23Z" level=debug msg="Username: " policy-opa-pdp | time="2025-06-18T15:20:23Z" level=debug msg="Password: " policy-opa-pdp | time="2025-06-18T15:20:23Z" level=warning msg="USE_KAFKA_FOR_PATCH not defined, using default value: false" policy-opa-pdp | time="2025-06-18T15:20:23Z" level=debug msg="Configuration module: environment initialised" policy-opa-pdp | DEBU[2025-06-18T15:20:23.7061+00:00] logger initialised Filepath = /var/logs/logs.log, Logsize(MB) = 10, Backups = 3, Loglevel = debug policy-opa-pdp | DEBU[2025-06-18T15:20:23.7065+00:00] Name: opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 policy-opa-pdp | DEBU[2025-06-18T15:20:23.7111+00:00] Starting OPA PDP Service policy-opa-pdp | INFO[2025-06-18T15:20:28.7152+00:00] HTTP server started policy-opa-pdp | DEBU[2025-06-18T15:20:28.7165+00:00] Create an instance of OPA Object policy-opa-pdp | DEBU[2025-06-18T15:20:28.7166+00:00] Configure an instance of OPA Object policy-opa-pdp | DEBU[2025-06-18T15:20:28.7179+00:00] Topic start :::: policy-pdp-pap policy-opa-pdp | DEBU[2025-06-18T15:20:28.7180+00:00] Creating Kafka Consumer singleton instance policy-opa-pdp | &map[auto.offset.reset:latest bootstrap.servers:kafka:9092 group.id:opa-pdp]DEBU[2025-06-18T15:20:28.7207+00:00] Topic Subscribed: policy-pdp-pap policy-opa-pdp | DEBU[2025-06-18T15:20:28.7207+00:00] Created SIngleton consumer instance policy-opa-pdp | DEBU[2025-06-18T15:20:28.7288+00:00] Starting PDP Message Listener..... policy-opa-pdp | DEBU[2025-06-18T15:20:38.7314+00:00] New Ticker started with interval 60000 policy-opa-pdp | DEBU[2025-06-18T15:20:48.7315+00:00] After registration successful delay policy-opa-pdp | DEBU[2025-06-18T15:21:38.7491+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"b5423181-f08c-41fb-90bd-99e43f7bd824","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750260098748","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-18T15:21:38.7491+00:00] Sending Heartbeat ... 
policy-opa-pdp | 2025/06/18 15:21:38 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-18T15:21:38.7812+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"b5423181-f08c-41fb-90bd-99e43f7bd824","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750260098748","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-18T15:21:38.7815+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-18T15:21:38.7815+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-18T15:21:39.4332+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"476caf7f-7f1e-41ed-9e1e-8331cff4928a","timestampMs":1750260099325,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-18T15:21:39.4336+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-18T15:21:39.4343+00:00] PDP_UPDATE Message received: 
{"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"476caf7f-7f1e-41ed-9e1e-8331cff4928a","timestampMs":1750260099325,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-18T15:21:39.4343+00:00] Policy Is Allowed: slice.capacity.check policy-opa-pdp | DEBU[2025-06-18T15:21:39.4344+00:00] Validating properties data for policy: slice.capacity.check policy-opa-pdp | DEBU[2025-06-18T15:21:39.4346+00:00] Validating properties policy for policy: slice.capacity.check policy-opa-pdp | INFO[2025-06-18T15:21:39.4352+00:00] Validation successful for policy: slice.capacity.check policy-opa-pdp | INFO[2025-06-18T15:21:39.4371+00:00] Directory created: /opt/policies/slice/capacity/check policy-opa-pdp | INFO[2025-06-18T15:21:39.4374+00:00] Policy file saved: /opt/policies/slice/capacity/check/policy.rego policy-opa-pdp | INFO[2025-06-18T15:21:39.4379+00:00] Directory created: /opt/data/node/slice/capacity/check policy-opa-pdp | INFO[2025-06-18T15:21:39.4382+00:00] Data file saved: /opt/data/node/slice/capacity/check/data.json policy-opa-pdp | DEBU[2025-06-18T15:21:39.4384+00:00] Before calling combinedoutput policy-opa-pdp | DEBU[2025-06-18T15:21:39.4583+00:00] Bundle Built Sucessfully.... 
policy-opa-pdp | DEBU[2025-06-18T15:21:39.4628+00:00] storage not found creating : /node policy-opa-pdp | DEBU[2025-06-18T15:21:39.4628+00:00] storage not found creating : /node/slice policy-opa-pdp | DEBU[2025-06-18T15:21:39.4629+00:00] storage not found creating : /node/slice/capacity policy-opa-pdp | DEBU[2025-06-18T15:21:39.4630+00:00] storage not found creating : /node/slice/capacity/check policy-opa-pdp | INFO[2025-06-18T15:21:39.4633+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-18T15:21:39.4633+00:00] Loaded Policy: slice.capacity.check policy-opa-pdp | INFO[2025-06-18T15:21:39.4635+00:00] Processed policies_to_be_deployed successfully policy-opa-pdp | INFO[2025-06-18T15:21:39.4637+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/18 15:21:39 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-18T15:21:39.4640+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"476caf7f-7f1e-41ed-9e1e-8331cff4928a","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"63147583-0a4b-408a-835b-ea90d1414001","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260099463","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-18T15:21:39.4641+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-18T15:21:39.4641+00:00] 120000 policy-opa-pdp | DEBU[2025-06-18T15:21:39.4643+00:00] New Ticker started with interval 120000 policy-opa-pdp | DEBU[2025-06-18T15:21:39.4731+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"476caf7f-7f1e-41ed-9e1e-8331cff4928a","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"63147583-0a4b-408a-835b-ea90d1414001","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260099463","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-18T15:21:39.4732+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-18T15:21:39.4733+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-18T15:21:39.5152+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"0d469264-614a-4e62-a41a-855166b5b769","timestampMs":1750260099326,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-18T15:21:39.5153+00:00] messageType: PDP_STATE_CHANGE policy-opa-pdp | 
DEBU[2025-06-18T15:21:39.5154+00:00] PDP STATE CHANGE message received: {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"0d469264-614a-4e62-a41a-855166b5b769","timestampMs":1750260099326,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-18T15:21:39.5157+00:00] State change from PASSIVE To : ACTIVE policy-opa-pdp | INFO[2025-06-18T15:21:39.5157+00:00] Sending PDP Status With State Change response policy-opa-pdp | 2025/06/18 15:21:39 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-18T15:21:39.5160+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"0d469264-614a-4e62-a41a-855166b5b769","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"e118adbd-54c6-4af9-ad9a-f07be5e23a48","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260099515","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-18T15:21:39.5160+00:00] PDP_STATUS With State Change Message Sent Successfully policy-opa-pdp | DEBU[2025-06-18T15:21:39.5303+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"0d469264-614a-4e62-a41a-855166b5b769","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"e118adbd-54c6-4af9-ad9a-f07be5e23a48","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260099515","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-18T15:21:39.5304+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-18T15:21:39.5304+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-18T15:21:39.8752+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"87566c01-95a8-46a9-9868-b7451d94a3a9","timestampMs":1750260099847,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-18T15:21:39.8754+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-18T15:21:39.8758+00:00] PDP_UPDATE Message received: {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"87566c01-95a8-46a9-9868-b7451d94a3a9","timestampMs":1750260099847,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | INFO[2025-06-18T15:21:39.8760+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/18 15:21:39 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-18T15:21:39.8763+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp 
Update","response":{"responseTo":"87566c01-95a8-46a9-9868-b7451d94a3a9","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"bb51e931-86e5-4653-97da-252f45dc0e8b","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260099876","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-18T15:21:39.8764+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-18T15:21:39.8764+00:00] 120000 policy-opa-pdp | DEBU[2025-06-18T15:21:39.8842+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"87566c01-95a8-46a9-9868-b7451d94a3a9","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"bb51e931-86e5-4653-97da-252f45dc0e8b","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260099876","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-18T15:21:39.8842+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-18T15:21:39.8842+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-18T15:22:38.7429+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"f4280f1a-f2b0-4719-8ad1-603b801e230b","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260158742","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-18T15:22:38.7429+00:00] Sending Heartbeat ... 
policy-opa-pdp | 2025/06/18 15:22:38 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-18T15:22:38.7528+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"f4280f1a-f2b0-4719-8ad1-603b801e230b","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260158742","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-18T15:22:38.7529+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-18T15:22:38.7529+00:00] discarding event of type PDP_STATUS policy-opa-pdp | WARN[2025-06-18T15:22:55.5475+00:00] Invalid or Missing Request ID policy-opa-pdp | DEBU[2025-06-18T15:22:55.5476+00:00] Received Health Check message policy-opa-pdp | INFO[2025-06-18T15:22:55.5545+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-18T15:22:55.5546+00:00] datapath to get Data : / policy-opa-pdp | DEBU[2025-06-18T15:22:55.5547+00:00] Json Data at /: {"node":{"slice":{"capacity":{"check":{"threshold":70}}}},"system":{"version":{"build_commit":"","build_hostname":"","build_timestamp":"","version":"1.1.0"}}} policy-opa-pdp | DEBU[2025-06-18T15:22:56.9393+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"9824476a-97d1-4e30-ac45-d9064a937d1e","timestampMs":1750260176885,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-18T15:22:56.9394+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-18T15:22:56.9396+00:00] PDP_UPDATE Message received: 
{"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"9824476a-97d1-4e30-ac45-d9064a937d1e","timestampMs":1750260176885,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-18T15:22:56.9396+00:00] Check if Policy is Already Deployed: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-18T15:22:56.9398+00:00] Policy is new and should be deployed: zoneB 1.0.6 policy-opa-pdp | DEBU[2025-06-18T15:22:56.9399+00:00] Policy Is Allowed: zoneB policy-opa-pdp | DEBU[2025-06-18T15:22:56.9399+00:00] Validating properties data for policy: zoneB policy-opa-pdp | DEBU[2025-06-18T15:22:56.9399+00:00] Validating properties policy for policy: zoneB policy-opa-pdp | INFO[2025-06-18T15:22:56.9399+00:00] Validation successful for policy: zoneB policy-opa-pdp | INFO[2025-06-18T15:22:56.9401+00:00] Directory created: /opt/policies/zoneB policy-opa-pdp | INFO[2025-06-18T15:22:56.9402+00:00] Policy file saved: /opt/policies/zoneB/policy.rego policy-opa-pdp | INFO[2025-06-18T15:22:56.9403+00:00] Directory created: /opt/data/node/zoneB policy-opa-pdp | INFO[2025-06-18T15:22:56.9405+00:00] Data file saved: /opt/data/node/zoneB/data.json policy-opa-pdp | DEBU[2025-06-18T15:22:56.9406+00:00] Before calling combinedoutput policy-opa-pdp | DEBU[2025-06-18T15:22:56.9686+00:00] Bundle Built Sucessfully.... 
policy-opa-pdp | DEBU[2025-06-18T15:22:56.9729+00:00] storage not found creating : /node/zoneB policy-opa-pdp | INFO[2025-06-18T15:22:56.9730+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.zoneB" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "zoneB" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "zoneB", policy-opa-pdp | "policy-version": "1.0.6" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-18T15:22:56.9730+00:00] Loaded Policy: zoneB policy-opa-pdp | INFO[2025-06-18T15:22:56.9730+00:00] Processed policies_to_be_deployed successfully policy-opa-pdp | 2025/06/18 15:22:56 KafkaProducer or producer produce message policy-opa-pdp | INFO[2025-06-18T15:22:56.9731+00:00] Sending PDP Status With Update Response policy-opa-pdp | DEBU[2025-06-18T15:22:56.9731+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"9824476a-97d1-4e30-ac45-d9064a937d1e","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"17094e91-59c9-451b-8f86-c5144a657cdf","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260176973","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-18T15:22:56.9732+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-18T15:22:56.9732+00:00] 0 policy-opa-pdp | DEBU[2025-06-18T15:22:56.9865+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"9824476a-97d1-4e30-ac45-d9064a937d1e","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"17094e91-59c9-451b-8f86-c5144a657cdf","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260176973","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-18T15:22:56.9866+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-18T15:22:56.9866+00:00] discarding event of type PDP_STATUS policy-opa-pdp | INFO[2025-06-18T15:23:21.1332+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-18T15:23:21.1333+00:00] datapath to get Data : /node/zoneB/zone policy-opa-pdp | DEBU[2025-06-18T15:23:21.1333+00:00] Json Data at /node/zoneB/zone: 
{"zone_access_logs":[{"access":"granted","log_id":"log1","timestamp":"2024-11-01T09:00:00Z","user":"user1","zone_id":"zoneA"},{"access":"denied","log_id":"log2","timestamp":"2024-11-01T10:30:00Z","user":"user2","zone_id":"zoneA"},{"access":"granted","log_id":"log3","timestamp":"2024-11-01T11:00:00Z","user":"user3","zone_id":"zoneB"}]} policy-opa-pdp | DEBU[2025-06-18T15:23:21.1453+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-18T15:23:21.1454+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-18T15:23:21.1458+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-18T15:23:21.1459+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"a9fb8e5b-241a-439d-bf16-8080637a714a","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"d7dc6f62-2789-4ca8-86f5-c3a301ac424f","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":860,"timer_rego_query_compile_ns":160792,"timer_rego_query_eval_ns":587827,"timer_rego_query_parse_ns":121192,"timer_sdk_decision_eval_ns":1054744},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-18T15:23:21Z","timestamp":"2025-06-18T15:23:21.146023674Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-18T15:23:21.1478+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "a9fb8e5b-241a-439d-bf16-8080637a714a", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_log_view": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "has_zone_access": [ policy-opa-pdp | { policy-opa-pdp | "access": "granted", policy-opa-pdp | "user": "user1" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-18T15:23:21.1547+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-18T15:23:21.1548+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-18T15:23:21.1551+00:00] Validation successful for request fields policy-opa-pdp | WARN[2025-06-18T15:23:21.1551+00:00] Policy Name zoeB does not exist policy-opa-pdp | DEBU[2025-06-18T15:23:21.1618+00:00] PDP received a decision request. 
policy-opa-pdp | DEBU[2025-06-18T15:23:21.1618+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-18T15:23:21.1621+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-18T15:23:21.1622+00:00] SDK making a decision policy-opa-pdp | DEBU[2025-06-18T15:23:21.1631+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "1d76d93a-a2fe-4fd6-b660-3dd8c894a98a", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_log_view": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "has_zone_access": [ policy-opa-pdp | { policy-opa-pdp | "access": "granted", policy-opa-pdp | "user": "user1" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | {"decision_id":"1d76d93a-a2fe-4fd6-b660-3dd8c894a98a","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"d7dc6f62-2789-4ca8-86f5-c3a301ac424f","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":710,"timer_rego_query_eval_ns":490006,"timer_sdk_decision_eval_ns":576707},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-18T15:23:21Z","timestamp":"2025-06-18T15:23:21.162235684Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-18T15:23:21.5421+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"933926b5-d92c-4a50-a058-dcd94cbaa465","timestampMs":1750260201494,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-18T15:23:21.5424+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-18T15:23:21.5428+00:00] PDP_UPDATE Message received: {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"933926b5-d92c-4a50-a058-dcd94cbaa465","timestampMs":1750260201494,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | INFO[2025-06-18T15:23:21.5431+00:00] Found Policies to be undeployed policy-opa-pdp | INFO[2025-06-18T15:23:21.5432+00:00] Extracted Policy Name: zoneB, Version: 1.0.6 for undeployment policy-opa-pdp | DEBU[2025-06-18T15:23:21.5435+00:00] Deleting Policy from OPA : /zoneB policy-opa-pdp | DEBU[2025-06-18T15:23:21.5472+00:00] Removing policy directory: /opt/policies/zoneB policy-opa-pdp | DEBU[2025-06-18T15:23:21.5477+00:00] Deleting data from OPA : /node/zoneB policy-opa-pdp | DEBU[2025-06-18T15:23:21.5479+00:00] Analyzing dataPath: /node/zoneB policy-opa-pdp | DEBU[2025-06-18T15:23:21.5482+00:00] Path segments: [ node zoneB] policy-opa-pdp | DEBU[2025-06-18T15:23:21.5484+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/zoneB policy-opa-pdp | DEBU[2025-06-18T15:23:21.5486+00:00] Removing data directory: /opt/data/node/zoneB policy-opa-pdp | 
INFO[2025-06-18T15:23:21.5491+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-18T15:23:21.5492+00:00] Policies Map After Undeployment : { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-18T15:23:21.5495+00:00] Processed policies_to_be_undeployed successfully policy-opa-pdp | INFO[2025-06-18T15:23:21.5497+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/18 15:23:21 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-18T15:23:21.5502+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"933926b5-d92c-4a50-a058-dcd94cbaa465","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"b17e16c2-4e0f-4a82-972e-8f59fc53d936","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260201549","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-18T15:23:21.5504+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-18T15:23:21.5506+00:00] 0 policy-opa-pdp | DEBU[2025-06-18T15:23:21.5579+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"933926b5-d92c-4a50-a058-dcd94cbaa465","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"b17e16c2-4e0f-4a82-972e-8f59fc53d936","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260201549","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-18T15:23:21.5580+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-18T15:23:21.5581+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-18T15:23:22.8316+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | 
{"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"f460da34-916d-42bf-8d57-49ea1b0f96d7","timestampMs":1750260202809,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-18T15:23:22.8319+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-18T15:23:22.8321+00:00] PDP_UPDATE Message received: {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"f460da34-916d-42bf-8d57-49ea1b0f96d7","timestampMs":1750260202809,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-18T15:23:22.8322+00:00] Check if Policy is Already Deployed: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-18T15:23:22.8323+00:00] Policy is new and should be deployed: vehicle 1.0.6 policy-opa-pdp | DEBU[2025-06-18T15:23:22.8324+00:00] Policy Is Allowed: vehicle policy-opa-pdp | 
DEBU[2025-06-18T15:23:22.8325+00:00] Validating properties data for policy: vehicle policy-opa-pdp | DEBU[2025-06-18T15:23:22.8325+00:00] Validating properties policy for policy: vehicle policy-opa-pdp | INFO[2025-06-18T15:23:22.8326+00:00] Validation successful for policy: vehicle policy-opa-pdp | INFO[2025-06-18T15:23:22.8328+00:00] Directory created: /opt/policies/vehicle policy-opa-pdp | INFO[2025-06-18T15:23:22.8330+00:00] Policy file saved: /opt/policies/vehicle/policy.rego policy-opa-pdp | INFO[2025-06-18T15:23:22.8331+00:00] Directory created: /opt/data/node/vehicle policy-opa-pdp | INFO[2025-06-18T15:23:22.8332+00:00] Data file saved: /opt/data/node/vehicle/data.json policy-opa-pdp | DEBU[2025-06-18T15:23:22.8333+00:00] Before calling combinedoutput policy-opa-pdp | DEBU[2025-06-18T15:23:22.8612+00:00] Bundle Built Sucessfully.... policy-opa-pdp | DEBU[2025-06-18T15:23:22.8671+00:00] storage not found creating : /node/vehicle policy-opa-pdp | INFO[2025-06-18T15:23:22.8673+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.vehicle" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "vehicle" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "vehicle", policy-opa-pdp | "policy-version": "1.0.6" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-18T15:23:22.8674+00:00] Loaded Policy: vehicle policy-opa-pdp | INFO[2025-06-18T15:23:22.8675+00:00] Processed policies_to_be_deployed successfully policy-opa-pdp | INFO[2025-06-18T15:23:22.8677+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/18 15:23:22 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-18T15:23:22.8682+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"f460da34-916d-42bf-8d57-49ea1b0f96d7","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"fb19dab3-9bd0-45f3-8fdb-5b5e7801ab4b","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260202867","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-18T15:23:22.8682+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-18T15:23:22.8683+00:00] 0 policy-opa-pdp | DEBU[2025-06-18T15:23:22.8773+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"f460da34-916d-42bf-8d57-49ea1b0f96d7","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": 
\"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"fb19dab3-9bd0-45f3-8fdb-5b5e7801ab4b","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260202867","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-18T15:23:22.8773+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-18T15:23:22.8773+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-18T15:23:39.4789+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"d911ab4e-4565-4b49-9bec-99386456b326","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260219478","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-18T15:23:39.4789+00:00] Sending Heartbeat ... policy-opa-pdp | 2025/06/18 15:23:39 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-18T15:23:39.4876+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"d911ab4e-4565-4b49-9bec-99386456b326","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260219478","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-18T15:23:39.4877+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-18T15:23:39.4877+00:00] discarding event of type PDP_STATUS policy-opa-pdp | INFO[2025-06-18T15:23:46.9082+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-18T15:23:46.9083+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | DEBU[2025-06-18T15:23:46.9085+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} policy-opa-pdp | INFO[2025-06-18T15:23:46.9198+00:00] PDP received a request to update data through API policy-opa-pdp | DEBU[2025-06-18T15:23:46.9203+00:00] All fields are valid! 
policy-opa-pdp | INFO[2025-06-18T15:23:46.9203+00:00] data : [map[op:add path:/round value:trail]] policy-opa-pdp | INFO[2025-06-18T15:23:46.9203+00:00] policy name : vehicle policy-opa-pdp | DEBU[2025-06-18T15:23:46.9203+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] policy-opa-pdp | DEBU[2025-06-18T15:23:46.9203+00:00] dirParts : [ node vehicle] policy-opa-pdp | INFO[2025-06-18T15:23:46.9205+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} policy-opa-pdp | DEBU[2025-06-18T15:23:46.9206+00:00] root: /node/vehicle policy-opa-pdp | DEBU[2025-06-18T15:23:46.9206+00:00] path : round policy-opa-pdp | INFO[2025-06-18T15:23:46.9206+00:00] calling ParsePatchPathEscaped to check the path policy-opa-pdp | DEBU[2025-06-18T15:23:46.9206+00:00] No path conflicts detected policy-opa-pdp | INFO[2025-06-18T15:23:46.9206+00:00] Updated the data in the corresponding path successfully policy-opa-pdp | INFO[2025-06-18T15:23:46.9347+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-18T15:23:46.9348+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | DEBU[2025-06-18T15:23:46.9349+00:00] Json Data at /node/vehicle: {"round":"trail","vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} policy-opa-pdp | INFO[2025-06-18T15:23:46.9455+00:00] PDP received a request to update data through API policy-opa-pdp | DEBU[2025-06-18T15:23:46.9461+00:00] All fields are valid! policy-opa-pdp | INFO[2025-06-18T15:23:46.9462+00:00] data : [map[op:replace path:/round value:%!s(float64=578)]] policy-opa-pdp | INFO[2025-06-18T15:23:46.9462+00:00] policy name : vehicle policy-opa-pdp | DEBU[2025-06-18T15:23:46.9464+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] policy-opa-pdp | DEBU[2025-06-18T15:23:46.9465+00:00] dirParts : [ node vehicle] policy-opa-pdp | INFO[2025-06-18T15:23:46.9466+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} policy-opa-pdp | DEBU[2025-06-18T15:23:46.9467+00:00] root: /node/vehicle policy-opa-pdp | DEBU[2025-06-18T15:23:46.9468+00:00] path : round policy-opa-pdp | INFO[2025-06-18T15:23:46.9469+00:00] calling ParsePatchPathEscaped to check the path policy-opa-pdp | DEBU[2025-06-18T15:23:46.9470+00:00] No path conflicts detected policy-opa-pdp | INFO[2025-06-18T15:23:46.9471+00:00] Updated the data in the corresponding path successfully policy-opa-pdp | INFO[2025-06-18T15:23:46.9546+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-18T15:23:46.9546+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | DEBU[2025-06-18T15:23:46.9547+00:00] Json Data at /node/vehicle: {"round":578,"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} policy-opa-pdp | INFO[2025-06-18T15:23:46.9649+00:00] PDP received a request to update data through API policy-opa-pdp | DEBU[2025-06-18T15:23:46.9653+00:00] All fields are valid! 
policy-opa-pdp | INFO[2025-06-18T15:23:46.9654+00:00] data : [map[op:remove path:/round]] policy-opa-pdp | INFO[2025-06-18T15:23:46.9655+00:00] policy name : vehicle policy-opa-pdp | DEBU[2025-06-18T15:23:46.9657+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] policy-opa-pdp | DEBU[2025-06-18T15:23:46.9658+00:00] dirParts : [ node vehicle] policy-opa-pdp | INFO[2025-06-18T15:23:46.9660+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} policy-opa-pdp | DEBU[2025-06-18T15:23:46.9660+00:00] root: /node/vehicle policy-opa-pdp | DEBU[2025-06-18T15:23:46.9662+00:00] path : round policy-opa-pdp | INFO[2025-06-18T15:23:46.9662+00:00] calling ParsePatchPathEscaped to check the path policy-opa-pdp | DEBU[2025-06-18T15:23:46.9664+00:00] No path conflicts detected policy-opa-pdp | INFO[2025-06-18T15:23:46.9664+00:00] Updated the data in the corresponding path successfully policy-opa-pdp | INFO[2025-06-18T15:23:46.9732+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-18T15:23:46.9733+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | DEBU[2025-06-18T15:23:46.9733+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} policy-opa-pdp | DEBU[2025-06-18T15:23:46.9864+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-18T15:23:46.9865+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-18T15:23:46.9869+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-18T15:23:46.9870+00:00] SDK making a decision policy-opa-pdp | DEBU[2025-06-18T15:23:46.9893+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "592b78f5-0b94-4c1e-9b3f-609f417592f8", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_granted": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "user_has_vehicle_access": [ policy-opa-pdp | { policy-opa-pdp | "status": "available", policy-opa-pdp | "type": "car" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | {"decision_id":"592b78f5-0b94-4c1e-9b3f-609f417592f8","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"d7dc6f62-2789-4ca8-86f5-c3a301ac424f","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":880,"timer_rego_query_compile_ns":189403,"timer_rego_query_eval_ns":550997,"timer_rego_query_parse_ns":123001,"timer_sdk_decision_eval_ns":1754103},"msg":"Decision Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-18T15:23:46Z","timestamp":"2025-06-18T15:23:46.987077445Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-18T15:23:46.9985+00:00] PDP received a decision request. 
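Annotation: the three dynamic-data updates exercised above are JSON Patch (RFC 6902) operations against the policy's data path. Reconstructed from the "data :" entries in the log, the request bodies would be the following (the replace value arrives as a JSON number, which the Go handler prints as %!s(float64=578) since encoding/json decodes numbers to float64):

    [{"op": "add", "path": "/round", "value": "trail"}]
    [{"op": "replace", "path": "/round", "value": 578}]
    [{"op": "remove", "path": "/round"}]

The successful vehicle decision above carried the input echoed in its decision log:

    {"actions": ["use"], "attributes": ["type", "status"], "user": "user1", "vehicle_id": "v1"}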
policy-opa-pdp | DEBU[2025-06-18T15:23:46.9985+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-18T15:23:46.9988+00:00] Validation successful for request fields policy-opa-pdp | WARN[2025-06-18T15:23:46.9988+00:00] Policy Name vehile does not exist policy-opa-pdp | DEBU[2025-06-18T15:23:47.0072+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-18T15:23:47.0074+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-18T15:23:47.0077+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-18T15:23:47.0079+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"8ceba89e-75c2-44db-a3b4-51b5c16994c5","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"d7dc6f62-2789-4ca8-86f5-c3a301ac424f","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":721,"timer_rego_query_eval_ns":425936,"timer_sdk_decision_eval_ns":523687},"msg":"Decision Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-18T15:23:47Z","timestamp":"2025-06-18T15:23:47.008063509Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-18T15:23:47.0088+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "8ceba89e-75c2-44db-a3b4-51b5c16994c5", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_granted": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "user_has_vehicle_access": [ policy-opa-pdp | { policy-opa-pdp | "status": "available", policy-opa-pdp | "type": "car" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-18T15:23:47.2990+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"baee5c1b-ec23-40fa-9b12-d4ff77366cce","timestampMs":1750260227276,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-18T15:23:47.2992+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-18T15:23:47.2997+00:00] PDP_UPDATE Message received: {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"baee5c1b-ec23-40fa-9b12-d4ff77366cce","timestampMs":1750260227276,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | INFO[2025-06-18T15:23:47.2998+00:00] Found Policies to be undeployed policy-opa-pdp | INFO[2025-06-18T15:23:47.2998+00:00] Extracted Policy Name: vehicle, Version: 1.0.6 for undeployment policy-opa-pdp | DEBU[2025-06-18T15:23:47.2999+00:00] Deleting Policy from OPA : /vehicle policy-opa-pdp | DEBU[2025-06-18T15:23:47.3039+00:00] Removing policy directory: /opt/policies/vehicle policy-opa-pdp | DEBU[2025-06-18T15:23:47.3041+00:00] Deleting data from OPA : /node/vehicle policy-opa-pdp | DEBU[2025-06-18T15:23:47.3042+00:00] Analyzing dataPath: /node/vehicle policy-opa-pdp | DEBU[2025-06-18T15:23:47.3042+00:00] 
Path segments: [ node vehicle] policy-opa-pdp | DEBU[2025-06-18T15:23:47.3042+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/vehicle policy-opa-pdp | DEBU[2025-06-18T15:23:47.3043+00:00] Removing data directory: /opt/data/node/vehicle policy-opa-pdp | INFO[2025-06-18T15:23:47.3045+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-18T15:23:47.3045+00:00] Policies Map After Undeployment : { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-18T15:23:47.3045+00:00] Processed policies_to_be_undeployed successfully policy-opa-pdp | INFO[2025-06-18T15:23:47.3046+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/18 15:23:47 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-18T15:23:47.3048+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"baee5c1b-ec23-40fa-9b12-d4ff77366cce","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"1bfce4f5-882f-4bce-a749-ee5d49bcb000","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260227304","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-18T15:23:47.3048+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-18T15:23:47.3048+00:00] 0 policy-opa-pdp | DEBU[2025-06-18T15:23:47.3116+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"baee5c1b-ec23-40fa-9b12-d4ff77366cce","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"1bfce4f5-882f-4bce-a749-ee5d49bcb000","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260227304","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-18T15:23:47.3116+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-18T15:23:47.3117+00:00] discarding event of type PDP_STATUS policy-opa-pdp | INFO[2025-06-18T15:23:47.6962+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-18T15:23:47.6963+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | WARN[2025-06-18T15:23:47.6963+00:00] Error in reading data under /node/vehicle path policy-opa-pdp | ERRO[2025-06-18T15:23:47.6964+00:00] Error in getting 
data - storage_not_found_error: /node/vehicle: document does not exist policy-opa-pdp | INFO[2025-06-18T15:23:47.7092+00:00] PDP received a request to update data through API policy-opa-pdp | DEBU[2025-06-18T15:23:47.7095+00:00] All fields are valid! policy-opa-pdp | INFO[2025-06-18T15:23:47.7096+00:00] data : [map[op:remove path:/round]] policy-opa-pdp | INFO[2025-06-18T15:23:47.7096+00:00] policy name : vehicle policy-opa-pdp | DEBU[2025-06-18T15:23:47.7098+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0]] policy-opa-pdp | ERRO[2025-06-18T15:23:47.7098+00:00] Policy associated with the patch request does not exists policy-opa-pdp | DEBU[2025-06-18T15:23:48.4552+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSI
sCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"09990849-dc24-4be1-b4a6-aaafc3a68826","timestampMs":1750260228431,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-18T15:23:48.4590+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-18T15:23:48.4593+00:00] PDP_UPDATE Message received: 
{"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"09990849-dc24-4be1-b4a6-aaafc3a68826","timestampMs":1750260228431,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-18T15:23:48.4594+00:00] Check if Policy is Already Deployed: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-18T15:23:48.4594+00:00] Policy is new and should be deployed: abac 1.0.7 policy-opa-pdp | DEBU[2025-06-18T15:23:48.4595+00:00] Policy Is Allowed: abac policy-opa-pdp | DEBU[2025-06-18T15:23:48.4596+00:00] Validating properties data for policy: abac policy-opa-pdp | DEBU[2025-06-18T15:23:48.4597+00:00] Validating properties policy for policy: abac policy-opa-pdp | INFO[2025-06-18T15:23:48.4597+00:00] Validation successful for policy: abac policy-opa-pdp | INFO[2025-06-18T15:23:48.4600+00:00] Directory created: /opt/policies/abac policy-opa-pdp | INFO[2025-06-18T15:23:48.4602+00:00] Policy file saved: /opt/policies/abac/policy.rego policy-opa-pdp | INFO[2025-06-18T15:23:48.4603+00:00] Directory created: /opt/data/node/abac policy-opa-pdp | INFO[2025-06-18T15:23:48.4603+00:00] Data file saved: /opt/data/node/abac/data.json policy-opa-pdp | DEBU[2025-06-18T15:23:48.4604+00:00] Before calling combinedoutput policy-opa-pdp | DEBU[2025-06-18T15:23:48.4853+00:00] Bundle Built Sucessfully.... 
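Annotation: the policy payload in the PDP_UPDATE above is base64-encoded Rego. Decoded, the abac policy saved to /opt/policies/abac/policy.rego reads:

    package abac

    import rego.v1

    default allow := false

    allow if {
     viewable_sensor_data
     action_is_read
    }

    action_is_read if "read" in input.actions

    viewable_sensor_data contains view_data if {
     some sensor_data in data.node.abac.sensor_data
     sensor_data.timestamp >= input.time_period.from
     sensor_data.timestamp < input.time_period.to

     view_data := {datatype: sensor_data[datatype] | datatype in input.datatypes}
    }

The node.abac entry under properties.data decodes likewise to the sensor_data JSON document that appears verbatim in the /node/abac read below.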
policy-opa-pdp | DEBU[2025-06-18T15:23:48.4892+00:00] storage not found creating : /node/abac policy-opa-pdp | INFO[2025-06-18T15:23:48.4895+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.abac" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "abac" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "abac", policy-opa-pdp | "policy-version": "1.0.7" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-18T15:23:48.4895+00:00] Loaded Policy: abac policy-opa-pdp | INFO[2025-06-18T15:23:48.4895+00:00] Processed policies_to_be_deployed successfully policy-opa-pdp | INFO[2025-06-18T15:23:48.4896+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/18 15:23:48 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-18T15:23:48.4899+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"09990849-dc24-4be1-b4a6-aaafc3a68826","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"d5c8c4d2-2559-428c-97a0-f6bb46848850","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260228489","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-18T15:23:48.4899+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-18T15:23:48.4900+00:00] 0 policy-opa-pdp | DEBU[2025-06-18T15:23:48.4985+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"09990849-dc24-4be1-b4a6-aaafc3a68826","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"d5c8c4d2-2559-428c-97a0-f6bb46848850","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260228489","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-18T15:23:48.4986+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-18T15:23:48.4986+00:00] discarding event of type PDP_STATUS policy-opa-pdp | INFO[2025-06-18T15:24:11.5831+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-18T15:24:11.5832+00:00] datapath to get Data : /node/abac policy-opa-pdp | DEBU[2025-06-18T15:24:11.5834+00:00] Json Data at /node/abac: {"sensor_data":[{"humidity":"40%","id":"0001","location":"Sri Lanka","particle_density":"1.3 g/l","precipitation":"1000 mm","temperature":"28 C","timestamp":"2024-02-26","windspeed":"5.5 m/s"},{"humidity":"45%","id":"0002","location":"Colombo","particle_density":"1.5 g/l","precipitation":"1200 mm","temperature":"30 
C","timestamp":"2024-02-26","windspeed":"6.0 m/s"},{"humidity":"60%","id":"0003","location":"Kandy","particle_density":"1.1 g/l","precipitation":"800 mm","temperature":"25 C","timestamp":"2024-02-26","windspeed":"4.5 m/s"},{"humidity":"30%","id":"0004","location":"Galle","particle_density":"1.8 g/l","precipitation":"500 mm","temperature":"35 C","timestamp":"2024-02-27","windspeed":"7.2 m/s"},{"humidity":"20%","id":"0005","location":"Jaffna","particle_density":"0.9 g/l","precipitation":"300 mm","temperature":"-5 C","timestamp":"2024-02-27","windspeed":"3.8 m/s"},{"humidity":"55%","id":"0006","location":"Trincomalee","particle_density":"1.2 g/l","precipitation":"1000 mm","temperature":"20 C","timestamp":"2024-02-28","windspeed":"5.0 m/s"},{"humidity":"50%","id":"0007","location":"Nuwara Eliya","particle_density":"1.3 g/l","precipitation":"600 mm","temperature":"25 C","timestamp":"2024-02-28","windspeed":"4.0 m/s"},{"humidity":"40%","id":"0008","location":"Anuradhapura","particle_density":"1.4 g/l","precipitation":"700 mm","temperature":"28 C","timestamp":"2024-02-29","windspeed":"5.8 m/s"},{"humidity":"65%","id":"0009","location":"Matara","particle_density":"1.6 g/l","precipitation":"900 mm","temperature":"32 C","timestamp":"2024-02-29","windspeed":"6.5 m/s"}]} policy-opa-pdp | DEBU[2025-06-18T15:24:11.5934+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-18T15:24:11.5937+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-18T15:24:11.5941+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-18T15:24:11.5943+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"8f711424-d231-48ec-acd6-e181d02cb949","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"d7dc6f62-2789-4ca8-86f5-c3a301ac424f","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":650,"timer_rego_query_compile_ns":160552,"timer_rego_query_eval_ns":915242,"timer_rego_query_parse_ns":133851,"timer_sdk_decision_eval_ns":1444578},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-18T15:24:11Z","timestamp":"2025-06-18T15:24:11.594479599Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-18T15:24:11.5966+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "8f711424-d231-48ec-acd6-e181d02cb949", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_read": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "viewable_sensor_data": [ policy-opa-pdp | { policy-opa-pdp | "location": "Galle", policy-opa-pdp | "precipitation": "500 mm", policy-opa-pdp | "temperature": "35 C", policy-opa-pdp | "windspeed": "7.2 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Jaffna", policy-opa-pdp | "precipitation": "300 mm", policy-opa-pdp | "temperature": "-5 C", policy-opa-pdp | "windspeed": "3.8 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Nuwara Eliya", policy-opa-pdp | 
"precipitation": "600 mm", policy-opa-pdp | "temperature": "25 C", policy-opa-pdp | "windspeed": "4.0 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Trincomalee", policy-opa-pdp | "precipitation": "1000 mm", policy-opa-pdp | "temperature": "20 C", policy-opa-pdp | "windspeed": "5.0 m/s" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-18T15:24:11.6058+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-18T15:24:11.6059+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-18T15:24:11.6063+00:00] Validation successful for request fields policy-opa-pdp | WARN[2025-06-18T15:24:11.6065+00:00] Policy Name abc does not exist policy-opa-pdp | DEBU[2025-06-18T15:24:11.6143+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-18T15:24:11.6143+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-18T15:24:11.6145+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-18T15:24:11.6146+00:00] SDK making a decision policy-opa-pdp | DEBU[2025-06-18T15:24:11.6154+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "b683ebc6-a097-4304-9e8a-66d368da46af", policy-opa-pdp | {"decision_id":"b683ebc6-a097-4304-9e8a-66d368da46af","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"d7dc6f62-2789-4ca8-86f5-c3a301ac424f","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":490,"timer_rego_query_eval_ns":531026,"timer_sdk_decision_eval_ns":609457},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-18T15:24:11Z","timestamp":"2025-06-18T15:24:11.614647537Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_read": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "viewable_sensor_data": [ policy-opa-pdp | { policy-opa-pdp | "location": "Galle", policy-opa-pdp | "precipitation": "500 mm", policy-opa-pdp | "temperature": "35 C", policy-opa-pdp | "windspeed": "7.2 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Jaffna", policy-opa-pdp | "precipitation": "300 mm", policy-opa-pdp | "temperature": "-5 C", policy-opa-pdp | "windspeed": "3.8 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Nuwara Eliya", policy-opa-pdp | "precipitation": "600 mm", policy-opa-pdp | "temperature": "25 C", policy-opa-pdp | "windspeed": "4.0 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Trincomalee", policy-opa-pdp | "precipitation": "1000 mm", policy-opa-pdp | "temperature": "20 C", policy-opa-pdp | "windspeed": "5.0 m/s" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { 
policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-18T15:24:12.1192+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"447dbb53-d158-49a2-9c89-5d22eb91b982","timestampMs":1750260252099,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-18T15:24:12.1194+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-18T15:24:12.1197+00:00] PDP_UPDATE Message received: {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"447dbb53-d158-49a2-9c89-5d22eb91b982","timestampMs":1750260252099,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | INFO[2025-06-18T15:24:12.1198+00:00] Found Policies to be undeployed policy-opa-pdp | INFO[2025-06-18T15:24:12.1199+00:00] Extracted Policy Name: abac, Version: 1.0.7 for undeployment policy-opa-pdp | DEBU[2025-06-18T15:24:12.1200+00:00] Deleting Policy from OPA : /abac policy-opa-pdp | DEBU[2025-06-18T15:24:12.1237+00:00] Removing policy directory: /opt/policies/abac policy-opa-pdp | DEBU[2025-06-18T15:24:12.1241+00:00] Deleting data from OPA : /node/abac policy-opa-pdp | DEBU[2025-06-18T15:24:12.1242+00:00] Analyzing dataPath: /node/abac policy-opa-pdp | DEBU[2025-06-18T15:24:12.1244+00:00] Path segments: [ node abac] policy-opa-pdp | DEBU[2025-06-18T15:24:12.1245+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/abac policy-opa-pdp | DEBU[2025-06-18T15:24:12.1247+00:00] Removing data directory: /opt/data/node/abac policy-opa-pdp | INFO[2025-06-18T15:24:12.1251+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-18T15:24:12.1253+00:00] Policies Map After Undeployment : { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-18T15:24:12.1255+00:00] Processed policies_to_be_undeployed successfully policy-opa-pdp | INFO[2025-06-18T15:24:12.1256+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/18 15:24:12 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-18T15:24:12.1258+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp 
Update","response":{"responseTo":"447dbb53-d158-49a2-9c89-5d22eb91b982","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"72f4896d-e2cc-4065-a81c-86cde00bd9d3","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260252125","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-18T15:24:12.1258+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-18T15:24:12.1259+00:00] 0 policy-opa-pdp | DEBU[2025-06-18T15:24:12.1332+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"447dbb53-d158-49a2-9c89-5d22eb91b982","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"72f4896d-e2cc-4065-a81c-86cde00bd9d3","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260252125","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-18T15:24:12.1333+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-18T15:24:12.1334+00:00] discarding event of type PDP_STATUS policy-pap | Waiting for api port 6969... policy-pap | api (172.17.0.7:6969) open policy-pap | Waiting for kafka port 9092... policy-pap | kafka (172.17.0.8:9092) open policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-pap | policy-pap | . ____ _ __ _ _ policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / policy-pap | =========|_|==============|___/=/_/_/_/ policy-pap | policy-pap | :: Spring Boot :: (v3.4.6) policy-pap | policy-pap | [2025-06-18T15:19:33.476+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 57 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-pap | [2025-06-18T15:19:33.478+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default" policy-pap | [2025-06-18T15:19:35.073+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-pap | [2025-06-18T15:19:35.186+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 97 ms. Found 7 JPA repository interfaces. 
policy-pap | [2025-06-18T15:19:36.202+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) policy-pap | [2025-06-18T15:19:36.217+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-18T15:19:36.219+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2025-06-18T15:19:36.219+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] policy-pap | [2025-06-18T15:19:36.283+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2025-06-18T15:19:36.283+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2736 ms policy-pap | [2025-06-18T15:19:36.766+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2025-06-18T15:19:36.846+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final policy-pap | [2025-06-18T15:19:36.891+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-pap | [2025-06-18T15:19:37.328+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-pap | [2025-06-18T15:19:37.377+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2025-06-18T15:19:37.590+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@52bc6fcf policy-pap | [2025-06-18T15:19:37.593+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-pap | [2025-06-18T15:19:37.695+00:00|INFO|pooling|main] HHH10001005: Database info: policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] policy-pap | Database driver: undefined/unknown policy-pap | Database version: 16.4 policy-pap | Autocommit mode: undefined/unknown policy-pap | Isolation level: undefined/unknown policy-pap | Minimum pool size: undefined/unknown policy-pap | Maximum pool size: undefined/unknown policy-pap | [2025-06-18T15:19:39.798+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-pap | [2025-06-18T15:19:39.802+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2025-06-18T15:19:41.107+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-4635555d-36e7-41ae-9c08-89802cf0473f-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 4635555d-36e7-41ae-9c08-89802cf0473f policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | 
internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | 
ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-18T15:19:41.163+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-18T15:19:41.309+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-18T15:19:41.310+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-18T15:19:41.310+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750259981308 policy-pap | [2025-06-18T15:19:41.313+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-4635555d-36e7-41ae-9c08-89802cf0473f-1, groupId=4635555d-36e7-41ae-9c08-89802cf0473f] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-18T15:19:41.313+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 
policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-18T15:19:41.314+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-18T15:19:41.323+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-18T15:19:41.323+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-18T15:19:41.323+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750259981323 policy-pap | [2025-06-18T15:19:41.323+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-18T15:19:41.657+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=opaGroup, description=null, pdpGroupState=ACTIVE, properties={}, pdpSubgroups=[PdpSubGroup(pdpType=opa, supportedPolicyTypes=[onap.policies.native.opa 1.0.0], policies=[slice.capacity.check 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties={}, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2025-06-18T15:19:41.788+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2025-06-18T15:19:41.865+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager policy-pap | [2025-06-18T15:19:42.075+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. policy-pap | [2025-06-18T15:19:42.870+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' policy-pap | [2025-06-18T15:19:42.985+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-18T15:19:43.004+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' policy-pap | [2025-06-18T15:19:43.026+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2025-06-18T15:19:43.027+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2025-06-18T15:19:43.027+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2025-06-18T15:19:43.028+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2025-06-18T15:19:43.028+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2025-06-18T15:19:43.028+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2025-06-18T15:19:43.028+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2025-06-18T15:19:43.030+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4635555d-36e7-41ae-9c08-89802cf0473f, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@7bf96c4e policy-pap | [2025-06-18T15:19:43.040+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4635555d-36e7-41ae-9c08-89802cf0473f, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-18T15:19:43.041+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true 
policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 4635555d-36e7-41ae-9c08-89802cf0473f policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 
policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-18T15:19:43.041+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-18T15:19:43.048+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-18T15:19:43.048+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-18T15:19:43.048+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750259983048 policy-pap | [2025-06-18T15:19:43.048+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3, groupId=4635555d-36e7-41ae-9c08-89802cf0473f] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-18T15:19:43.049+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2025-06-18T15:19:43.049+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=79ddd3c0-d514-403e-a2f4-af1ca771a874, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@77d5a3ee policy-pap | [2025-06-18T15:19:43.049+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=79ddd3c0-d514-403e-a2f4-af1ca771a874, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-18T15:19:43.049+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | 
client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | 
ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-18T15:19:43.049+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-18T15:19:43.055+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-18T15:19:43.055+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-18T15:19:43.055+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750259983055 policy-pap | [2025-06-18T15:19:43.055+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-18T15:19:43.055+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2025-06-18T15:19:43.055+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=79ddd3c0-d514-403e-a2f4-af1ca771a874, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-18T15:19:43.055+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4635555d-36e7-41ae-9c08-89802cf0473f, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-18T15:19:43.055+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=f92a2222-74e0-469a-ae78-d93bbaab0593, alive=false, publisher=null]]: starting policy-pap | [2025-06-18T15:19:43.083+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] 
policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm 
= SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-18T15:19:43.085+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-18T15:19:43.098+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-pap | [2025-06-18T15:19:43.115+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-18T15:19:43.115+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-18T15:19:43.115+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750259983115 policy-pap | [2025-06-18T15:19:43.116+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=f92a2222-74e0-469a-ae78-d93bbaab0593, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-18T15:19:43.116+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=37f34247-37c1-4a9f-a919-ed9c40c5373b, alive=false, publisher=null]]: starting policy-pap | [2025-06-18T15:19:43.117+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | 
retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-18T15:19:43.117+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-18T15:19:43.118+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
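Stripped of defaults, the ConsumerConfig and ProducerConfig dumps above amount to a handful of settings: bootstrap.servers = [kafka:9092] over PLAINTEXT, String serializers and deserializers, auto.offset.reset = latest for the consumers, and idempotent producers with acks = -1. For readers who want to poke at the same broker outside the PAP container, here is a minimal standalone sketch of that wiring; the topic and group names are taken from the log, while everything else (including reaching the broker from outside the compose network) is an assumption, not part of this build:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class PapStyleKafkaWiring {
    public static void main(String[] args) {
        Properties c = new Properties();
        c.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        c.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");                  // as in the consumer-policy-pap-* dumps
        c.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        c.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        c.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
            consumer.subscribe(List.of("policy-pdp-pap"));                    // "Subscribed to topic(s): policy-pdp-pap"
        }

        Properties p = new Properties();
        p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        p.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);                // "Instantiated an idempotent producer"
        p.put(ProducerConfig.ACKS_CONFIG, "all");                             // logged as acks = -1
        p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            producer.flush();                                                 // no-op here; the real PAP publishes PDP_UPDATEs
        }
    }
}

Note that because the sketch reuses group.id = policy-pap, running it against the live stack would trigger a rebalance of the consumers shown above; use a throwaway group id to observe without interfering.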
policy-pap | [2025-06-18T15:19:43.122+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-18T15:19:43.122+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-18T15:19:43.122+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750259983122 policy-pap | [2025-06-18T15:19:43.123+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=37f34247-37c1-4a9f-a919-ed9c40c5373b, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-18T15:19:43.123+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2025-06-18T15:19:43.123+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2025-06-18T15:19:43.124+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2025-06-18T15:19:43.125+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2025-06-18T15:19:43.127+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2025-06-18T15:19:43.128+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2025-06-18T15:19:43.128+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2025-06-18T15:19:43.129+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2025-06-18T15:19:43.129+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2025-06-18T15:19:43.130+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2025-06-18T15:19:43.129+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-pap | [2025-06-18T15:19:43.132+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.503 seconds (process running for 11.062) policy-pap | [2025-06-18T15:19:43.596+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: N2-PXQtqQN2FT0sdVSBvPw policy-pap | [2025-06-18T15:19:43.597+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: N2-PXQtqQN2FT0sdVSBvPw policy-pap | [2025-06-18T15:19:43.597+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-18T15:19:43.597+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: N2-PXQtqQN2FT0sdVSBvPw policy-pap | [2025-06-18T15:19:43.625+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 policy-pap | [2025-06-18T15:19:43.626+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 policy-pap | [2025-06-18T15:19:43.631+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3, groupId=4635555d-36e7-41ae-9c08-89802cf0473f] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-18T15:19:43.631+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3, groupId=4635555d-36e7-41ae-9c08-89802cf0473f] Cluster ID: N2-PXQtqQN2FT0sdVSBvPw policy-pap | [2025-06-18T15:19:43.765+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-18T15:19:43.772+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3, groupId=4635555d-36e7-41ae-9c08-89802cf0473f] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-18T15:19:43.977+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3, groupId=4635555d-36e7-41ae-9c08-89802cf0473f] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-18T15:19:44.019+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-18T15:19:44.431+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3, groupId=4635555d-36e7-41ae-9c08-89802cf0473f] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-18T15:19:44.495+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-18T15:19:44.500+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-18T15:19:44.530+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-c38ebaac-579a-4eb3-8a41-6aab48ee8550 policy-pap | [2025-06-18T15:19:44.530+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-18T15:19:45.398+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3, groupId=4635555d-36e7-41ae-9c08-89802cf0473f] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-18T15:19:45.402+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3, groupId=4635555d-36e7-41ae-9c08-89802cf0473f] (Re-)joining group policy-pap | [2025-06-18T15:19:45.407+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3, groupId=4635555d-36e7-41ae-9c08-89802cf0473f] Request joining group due to: need to re-join with the given member-id: consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3-792d4223-9c2d-40a0-8654-1fc390f5a010 policy-pap | 
[2025-06-18T15:19:45.407+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3, groupId=4635555d-36e7-41ae-9c08-89802cf0473f] (Re-)joining group policy-pap | [2025-06-18T15:19:47.556+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-c38ebaac-579a-4eb3-8a41-6aab48ee8550', protocol='range'} policy-pap | [2025-06-18T15:19:47.562+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-c38ebaac-579a-4eb3-8a41-6aab48ee8550=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-18T15:19:47.615+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-c38ebaac-579a-4eb3-8a41-6aab48ee8550', protocol='range'} policy-pap | [2025-06-18T15:19:47.616+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-18T15:19:47.621+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-18T15:19:47.640+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-18T15:19:47.662+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
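The UNKNOWN_TOPIC_OR_PARTITION and LEADER_NOT_AVAILABLE warnings above (and the later policy-notification one) are benign start-up races: the first metadata requests arrive before the auto-created topics have an elected leader, and the clients simply retry until metadata settles, which is exactly what the subsequent "Discovered group coordinator" lines show. If the noise matters, the topics can be created ahead of time; a hedged sketch with the Kafka AdminClient follows, where the partition and replication counts are illustrative assumptions for a single-broker test stack, not values read from this CSIT setup:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class PreCreatePolicyTopics {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Only the two topics that produced warnings in this log.
            admin.createTopics(List.of(
                    new NewTopic("policy-pdp-pap", 1, (short) 1),
                    new NewTopic("policy-notification", 1, (short) 1)
            )).all().get(); // blocks until the controller has finished creating both topics
        }
    }
}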
policy-pap | [2025-06-18T15:19:48.414+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3, groupId=4635555d-36e7-41ae-9c08-89802cf0473f] Successfully joined group with generation Generation{generationId=1, memberId='consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3-792d4223-9c2d-40a0-8654-1fc390f5a010', protocol='range'} policy-pap | [2025-06-18T15:19:48.415+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3, groupId=4635555d-36e7-41ae-9c08-89802cf0473f] Finished assignment for group at generation 1: {consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3-792d4223-9c2d-40a0-8654-1fc390f5a010=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-18T15:19:48.421+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3, groupId=4635555d-36e7-41ae-9c08-89802cf0473f] Successfully synced group in generation Generation{generationId=1, memberId='consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3-792d4223-9c2d-40a0-8654-1fc390f5a010', protocol='range'} policy-pap | [2025-06-18T15:19:48.421+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3, groupId=4635555d-36e7-41ae-9c08-89802cf0473f] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-18T15:19:48.421+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3, groupId=4635555d-36e7-41ae-9c08-89802cf0473f] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-18T15:19:48.423+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3, groupId=4635555d-36e7-41ae-9c08-89802cf0473f] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-18T15:19:48.425+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4635555d-36e7-41ae-9c08-89802cf0473f-3, groupId=4635555d-36e7-41ae-9c08-89802cf0473f] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
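At this point both topic sources are live: each consumer has joined its group at generation 1, been handed the single partition policy-pdp-pap-0, and, finding no committed offset, been reset to position 0 (on an empty, freshly created topic, auto.offset.reset = latest also lands at offset 0). The same join/sync/assign cycle can be observed from client code with a ConsumerRebalanceListener; a minimal sketch under the same broker assumptions as the earlier one:

import java.time.Duration;
import java.util.Collection;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RebalanceWatch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "rebalance-watch");          // throwaway group, deliberately not policy-pap
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
                @Override public void onPartitionsRevoked(Collection<TopicPartition> parts) {
                    System.out.println("revoked: " + parts);
                }
                @Override public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                    // fires at the same point as "Adding newly assigned partitions: policy-pdp-pap-0" above
                    System.out.println("assigned: " + parts);
                }
            });
            consumer.poll(Duration.ofSeconds(5)); // the first poll drives the join/sync round trip
        }
    }
}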
policy-pap | [2025-06-18T15:20:41.636+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2025-06-18T15:20:41.636+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-pap | [2025-06-18T15:20:41.639+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 3 ms policy-pap | [2025-06-18T15:21:38.798+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: policy-pap | [] policy-pap | [2025-06-18T15:21:38.798+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"b5423181-f08c-41fb-90bd-99e43f7bd824","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750260098748","deploymentInstanceInfo":""} policy-pap | [2025-06-18T15:21:38.798+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"b5423181-f08c-41fb-90bd-99e43f7bd824","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750260098748","deploymentInstanceInfo":""} policy-pap | [2025-06-18T15:21:38.806+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-18T15:21:39.351+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting policy-pap | [2025-06-18T15:21:39.352+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting listener policy-pap | [2025-06-18T15:21:39.352+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting timer policy-pap | [2025-06-18T15:21:39.353+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=476caf7f-7f1e-41ed-9e1e-8331cff4928a, expireMs=1750260129353] policy-pap | [2025-06-18T15:21:39.355+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting enqueue policy-pap | [2025-06-18T15:21:39.355+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=476caf7f-7f1e-41ed-9e1e-8331cff4928a, expireMs=1750260129353] policy-pap | [2025-06-18T15:21:39.356+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate started policy-pap | [2025-06-18T15:21:39.362+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"476caf7f-7f1e-41ed-9e1e-8331cff4928a","timestampMs":1750260099325,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:21:39.437+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"476caf7f-7f1e-41ed-9e1e-8331cff4928a","timestampMs":1750260099325,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 
policy-pap | [2025-06-18T15:21:39.438+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T15:21:39.444+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"476caf7f-7f1e-41ed-9e1e-8331cff4928a","timestampMs":1750260099325,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:21:39.444+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T15:21:39.479+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"476caf7f-7f1e-41ed-9e1e-8331cff4928a","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"63147583-0a4b-408a-835b-ea90d1414001","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260099463","deploymentInstanceInfo":""} policy-pap | [2025-06-18T15:21:39.480+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 476caf7f-7f1e-41ed-9e1e-8331cff4928a policy-pap | [2025-06-18T15:21:39.482+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"476caf7f-7f1e-41ed-9e1e-8331cff4928a","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": 
\"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"63147583-0a4b-408a-835b-ea90d1414001","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260099463","deploymentInstanceInfo":""} policy-pap | [2025-06-18T15:21:39.483+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping policy-pap | [2025-06-18T15:21:39.483+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping enqueue policy-pap | [2025-06-18T15:21:39.483+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping timer policy-pap | [2025-06-18T15:21:39.484+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=476caf7f-7f1e-41ed-9e1e-8331cff4928a, expireMs=1750260129353] policy-pap | [2025-06-18T15:21:39.484+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping listener policy-pap | [2025-06-18T15:21:39.484+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopped policy-pap | [2025-06-18T15:21:39.499+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"slice.capacity.check","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-18T15:21:39.500+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate successful policy-pap | [2025-06-18T15:21:39.500+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 start publishing next request policy-pap | [2025-06-18T15:21:39.500+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpStateChange starting policy-pap | [2025-06-18T15:21:39.500+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpStateChange starting listener policy-pap | [2025-06-18T15:21:39.500+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpStateChange starting timer policy-pap | [2025-06-18T15:21:39.501+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=0d469264-614a-4e62-a41a-855166b5b769, expireMs=1750260129501] policy-pap | [2025-06-18T15:21:39.501+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpStateChange starting enqueue policy-pap | [2025-06-18T15:21:39.501+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpStateChange started policy-pap | [2025-06-18T15:21:39.501+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=0d469264-614a-4e62-a41a-855166b5b769, expireMs=1750260129501] policy-pap | [2025-06-18T15:21:39.501+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"0d469264-614a-4e62-a41a-855166b5b769","timestampMs":1750260099326,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | 
[2025-06-18T15:21:39.521+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"0d469264-614a-4e62-a41a-855166b5b769","timestampMs":1750260099326,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:21:39.521+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-18T15:21:39.533+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-notification=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-18T15:21:39.533+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"0d469264-614a-4e62-a41a-855166b5b769","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"e118adbd-54c6-4af9-ad9a-f07be5e23a48","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260099515","deploymentInstanceInfo":""} policy-pap | [2025-06-18T15:21:39.534+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 0d469264-614a-4e62-a41a-855166b5b769 policy-pap | [2025-06-18T15:21:39.857+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"0d469264-614a-4e62-a41a-855166b5b769","timestampMs":1750260099326,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:21:39.858+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-18T15:21:39.865+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"0d469264-614a-4e62-a41a-855166b5b769","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"e118adbd-54c6-4af9-ad9a-f07be5e23a48","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260099515","deploymentInstanceInfo":""} policy-pap | [2025-06-18T15:21:39.866+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpStateChange stopping policy-pap | [2025-06-18T15:21:39.866+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpStateChange stopping enqueue policy-pap | [2025-06-18T15:21:39.866+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpStateChange stopping timer policy-pap | [2025-06-18T15:21:39.866+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=0d469264-614a-4e62-a41a-855166b5b769, expireMs=1750260129501] policy-pap | 
[2025-06-18T15:21:39.866+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpStateChange stopping listener policy-pap | [2025-06-18T15:21:39.866+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpStateChange stopped policy-pap | [2025-06-18T15:21:39.866+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpStateChange successful policy-pap | [2025-06-18T15:21:39.866+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 start publishing next request policy-pap | [2025-06-18T15:21:39.866+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting policy-pap | [2025-06-18T15:21:39.866+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting listener policy-pap | [2025-06-18T15:21:39.867+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting timer policy-pap | [2025-06-18T15:21:39.867+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=87566c01-95a8-46a9-9868-b7451d94a3a9, expireMs=1750260129867] policy-pap | [2025-06-18T15:21:39.867+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting enqueue policy-pap | [2025-06-18T15:21:39.867+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate started policy-pap | [2025-06-18T15:21:39.868+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"87566c01-95a8-46a9-9868-b7451d94a3a9","timestampMs":1750260099847,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:21:39.878+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"87566c01-95a8-46a9-9868-b7451d94a3a9","timestampMs":1750260099847,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:21:39.879+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T15:21:39.880+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"87566c01-95a8-46a9-9868-b7451d94a3a9","timestampMs":1750260099847,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:21:39.880+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T15:21:39.890+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp 
Update","response":{"responseTo":"87566c01-95a8-46a9-9868-b7451d94a3a9","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"bb51e931-86e5-4653-97da-252f45dc0e8b","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260099876","deploymentInstanceInfo":""} policy-pap | [2025-06-18T15:21:39.890+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping policy-pap | [2025-06-18T15:21:39.890+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping enqueue policy-pap | [2025-06-18T15:21:39.890+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping timer policy-pap | [2025-06-18T15:21:39.890+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=87566c01-95a8-46a9-9868-b7451d94a3a9, expireMs=1750260129867] policy-pap | [2025-06-18T15:21:39.891+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping listener policy-pap | [2025-06-18T15:21:39.891+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopped policy-pap | [2025-06-18T15:21:39.900+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate successful policy-pap | [2025-06-18T15:21:39.900+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 has no more requests policy-pap | [2025-06-18T15:21:39.901+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"87566c01-95a8-46a9-9868-b7451d94a3a9","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"bb51e931-86e5-4653-97da-252f45dc0e8b","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260099876","deploymentInstanceInfo":""} policy-pap | [2025-06-18T15:21:39.901+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 87566c01-95a8-46a9-9868-b7451d94a3a9 policy-pap | [2025-06-18T15:21:43.130+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms policy-pap | [2025-06-18T15:22:09.354+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=476caf7f-7f1e-41ed-9e1e-8331cff4928a, expireMs=1750260129353] policy-pap | [2025-06-18T15:22:09.501+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=0d469264-614a-4e62-a41a-855166b5b769, expireMs=1750260129501] policy-pap | [2025-06-18T15:22:38.757+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"f4280f1a-f2b0-4719-8ad1-603b801e230b","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260158742","deploymentInstanceInfo":""} policy-pap | 
[2025-06-18T15:22:38.758+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"f4280f1a-f2b0-4719-8ad1-603b801e230b","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260158742","deploymentInstanceInfo":""} policy-pap | [2025-06-18T15:22:38.758+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-18T15:22:56.882+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group opaGroup policy-pap | [2025-06-18T15:22:56.883+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-7] add policy zoneB 1.0.6 to subgroup opaGroup opa count=2 policy-pap | [2025-06-18T15:22:56.884+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering a deploy for policy zoneB 1.0.6 policy-pap | [2025-06-18T15:22:56.885+00:00|INFO|SessionData|http-nio-6969-exec-7] add update opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 opaGroup opa policies=1 policy-pap | [2025-06-18T15:22:56.886+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group opaGroup policy-pap | [2025-06-18T15:22:56.886+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group opaGroup policy-pap | [2025-06-18T15:22:56.901+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=DEPLOYMENT, timestamp=2025-06-18T15:22:56Z, user=policyadmin)] policy-pap | [2025-06-18T15:22:56.933+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting policy-pap | [2025-06-18T15:22:56.933+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting listener policy-pap | [2025-06-18T15:22:56.933+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting timer policy-pap | [2025-06-18T15:22:56.933+00:00|INFO|TimerManager|http-nio-6969-exec-7] update timer registered Timer [name=9824476a-97d1-4e30-ac45-d9064a937d1e, expireMs=1750260206933] policy-pap | [2025-06-18T15:22:56.933+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting enqueue policy-pap | [2025-06-18T15:22:56.933+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate started policy-pap | [2025-06-18T15:22:56.933+00:00|INFO|TimerManager|Thread-9] update timer waiting 30000ms Timer [name=9824476a-97d1-4e30-ac45-d9064a937d1e, expireMs=1750260206933] policy-pap | [2025-06-18T15:22:56.934+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"9824476a-97d1-4e30-ac45-d9064a937d1e","timestampMs":1750260176885,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:22:56.941+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | 
{"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"9824476a-97d1-4e30-ac45-d9064a937d1e","timestampMs":1750260176885,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:22:56.941+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T15:22:56.943+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"9824476a-97d1-4e30-ac45-d9064a937d1e","timestampMs":1750260176885,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:22:56.943+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T15:22:56.984+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"9824476a-97d1-4e30-ac45-d9064a937d1e","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"17094e91-59c9-451b-8f86-c5144a657cdf","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260176973","deploymentInstanceInfo":""} policy-pap | [2025-06-18T15:22:56.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping policy-pap | [2025-06-18T15:22:56.985+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"9824476a-97d1-4e30-ac45-d9064a937d1e","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"17094e91-59c9-451b-8f86-c5144a657cdf","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260176973","deploymentInstanceInfo":""} policy-pap | 
[2025-06-18T15:22:56.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping enqueue policy-pap | [2025-06-18T15:22:56.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping timer policy-pap | [2025-06-18T15:22:56.985+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=9824476a-97d1-4e30-ac45-d9064a937d1e, expireMs=1750260206933] policy-pap | [2025-06-18T15:22:56.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping listener policy-pap | [2025-06-18T15:22:56.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopped policy-pap | [2025-06-18T15:22:56.986+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 9824476a-97d1-4e30-ac45-d9064a937d1e policy-pap | [2025-06-18T15:22:56.996+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate successful policy-pap | [2025-06-18T15:22:56.996+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 has no more requests policy-pap | [2025-06-18T15:22:56.997+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-18T15:23:21.492+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group opaGroup policy-pap | [2025-06-18T15:23:21.494+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-8] remove policy zoneB 1.0.6 from subgroup opaGroup opa count=1 policy-pap | [2025-06-18T15:23:21.494+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] Registering an undeploy for policy zoneB 1.0.6 policy-pap | [2025-06-18T15:23:21.494+00:00|INFO|SessionData|http-nio-6969-exec-8] add update opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 opaGroup opa policies=0 policy-pap | [2025-06-18T15:23:21.494+00:00|INFO|SessionData|http-nio-6969-exec-8] update cached group opaGroup policy-pap | [2025-06-18T15:23:21.494+00:00|INFO|SessionData|http-nio-6969-exec-8] updating DB group opaGroup policy-pap | [2025-06-18T15:23:21.510+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-18T15:23:21Z, user=policyadmin)] policy-pap | [2025-06-18T15:23:21.533+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting policy-pap | [2025-06-18T15:23:21.533+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting listener policy-pap | [2025-06-18T15:23:21.533+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting timer policy-pap | [2025-06-18T15:23:21.533+00:00|INFO|TimerManager|http-nio-6969-exec-8] update timer registered Timer [name=933926b5-d92c-4a50-a058-dcd94cbaa465, expireMs=1750260231533] policy-pap | [2025-06-18T15:23:21.533+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting enqueue policy-pap | [2025-06-18T15:23:21.533+00:00|INFO|ServiceManager|http-nio-6969-exec-8] 
opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate started policy-pap | [2025-06-18T15:23:21.534+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"933926b5-d92c-4a50-a058-dcd94cbaa465","timestampMs":1750260201494,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:23:21.555+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"933926b5-d92c-4a50-a058-dcd94cbaa465","timestampMs":1750260201494,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:23:21.555+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T15:23:21.562+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"933926b5-d92c-4a50-a058-dcd94cbaa465","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"b17e16c2-4e0f-4a82-972e-8f59fc53d936","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260201549","deploymentInstanceInfo":""} policy-pap | [2025-06-18T15:23:21.562+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 933926b5-d92c-4a50-a058-dcd94cbaa465 policy-pap | [2025-06-18T15:23:21.568+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"933926b5-d92c-4a50-a058-dcd94cbaa465","timestampMs":1750260201494,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:23:21.569+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T15:23:21.580+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"933926b5-d92c-4a50-a058-dcd94cbaa465","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"b17e16c2-4e0f-4a82-972e-8f59fc53d936","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260201549","deploymentInstanceInfo":""} policy-pap | [2025-06-18T15:23:21.581+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping policy-pap | [2025-06-18T15:23:21.581+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping enqueue policy-pap | [2025-06-18T15:23:21.581+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping timer policy-pap | [2025-06-18T15:23:21.581+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=933926b5-d92c-4a50-a058-dcd94cbaa465, expireMs=1750260231533] policy-pap | [2025-06-18T15:23:21.581+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping listener policy-pap | [2025-06-18T15:23:21.581+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopped policy-pap | [2025-06-18T15:23:21.604+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate successful policy-pap | [2025-06-18T15:23:21.604+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 has no more requests policy-pap | [2025-06-18T15:23:21.604+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]} policy-pap | [2025-06-18T15:23:22.021+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group opaGroup policy-pap | [2025-06-18T15:23:22.024+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-9] failed to undeploy policy: zoneB null policy-pap | [2025-06-18T15:23:22.024+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-9] undeploy policy failed policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: zoneB null policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) policy-pap | at 
org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) policy-pap | at 
org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at 
org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) policy-pap | [2025-06-18T15:23:22.809+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group opaGroup policy-pap | 
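The WARN and PfModelException above are the result of undeploying zoneB a second time: the DELETE at 15:23:21 already removed it from opaGroup, so the retry at 15:23:22 finds no PDP group containing the policy and the controller logs "undeploy policy failed". A sketch of that failing call, assuming PAP's standard undeploy endpoint on port 6969; the host, credentials and TLS handling below are placeholders, not values taken from this run:

```python
import requests

# Hypothetical reproduction of the second (failing) undeploy request.
# PAP_HOST and the credentials are illustrative placeholders.
PAP_HOST = "https://localhost:6969"

resp = requests.delete(
    f"{PAP_HOST}/policy/pap/v1/pdps/policies/zoneB",
    auth=("policyadmin", "<password>"),
    verify=False,  # CSIT-style environments commonly use self-signed certs
)
# The first call succeeds; repeating it yields the
# "policy does not appear in any PDP group: zoneB null" error seen above.
print(resp.status_code, resp.text)
```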
[2025-06-18T15:23:22.809+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-10] add policy vehicle 1.0.6 to subgroup opaGroup opa count=2 policy-pap | [2025-06-18T15:23:22.809+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy vehicle 1.0.6 policy-pap | [2025-06-18T15:23:22.809+00:00|INFO|SessionData|http-nio-6969-exec-10] add update opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 opaGroup opa policies=1 policy-pap | [2025-06-18T15:23:22.809+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group opaGroup policy-pap | [2025-06-18T15:23:22.809+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group opaGroup policy-pap | [2025-06-18T15:23:22.816+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=DEPLOYMENT, timestamp=2025-06-18T15:23:22Z, user=policyadmin)] policy-pap | [2025-06-18T15:23:22.825+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting policy-pap | [2025-06-18T15:23:22.825+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting listener policy-pap | [2025-06-18T15:23:22.825+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting timer policy-pap | [2025-06-18T15:23:22.825+00:00|INFO|TimerManager|http-nio-6969-exec-10] update timer registered Timer [name=f460da34-916d-42bf-8d57-49ea1b0f96d7, expireMs=1750260232825] policy-pap | [2025-06-18T15:23:22.825+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting enqueue policy-pap | [2025-06-18T15:23:22.825+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate started policy-pap | [2025-06-18T15:23:22.825+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"f460da34-916d-42bf-8d57-49ea1b0f96d7","timestampMs":1750260202809,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:23:22.835+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | 
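The PDP_UPDATE above carries the vehicle policy with the same encoding. Decoded with the helper shown earlier, policy.vehicle is another rego.v1 module, here gating access on vehicle ownership, and data["node.vehicle"] is its data document:

```rego
package vehicle

import rego.v1

default allow := false

allow if {
    user_has_vehicle_access
    action_is_granted
}

action_is_granted if {
    "use" in input.actions
}

user_has_vehicle_access contains vehicle_data if {
    some vehicle in data.node.vehicle.vehicles
    vehicle.vehicle_id == input.vehicle_id
    vehicle.owner == input.user
    vehicle_data := {info: vehicle[info] | info in input.attributes}
}
```

```json
{
  "vehicles": [
    { "vehicle_id": "v1", "owner": "user1", "type": "car", "status": "available" },
    { "vehicle_id": "v2", "owner": "user2", "type": "bike", "status": "in use" }
  ]
}
```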
{"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"f460da34-916d-42bf-8d57-49ea1b0f96d7","timestampMs":1750260202809,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:23:22.835+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T15:23:22.836+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"f460da34-916d-42bf-8d57-49ea1b0f96d7","timestampMs":1750260202809,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:23:22.836+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T15:23:22.880+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"f460da34-916d-42bf-8d57-49ea1b0f96d7","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": 
\"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"fb19dab3-9bd0-45f3-8fdb-5b5e7801ab4b","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260202867","deploymentInstanceInfo":""} policy-pap | [2025-06-18T15:23:22.881+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id f460da34-916d-42bf-8d57-49ea1b0f96d7 policy-pap | [2025-06-18T15:23:22.882+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"f460da34-916d-42bf-8d57-49ea1b0f96d7","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"fb19dab3-9bd0-45f3-8fdb-5b5e7801ab4b","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260202867","deploymentInstanceInfo":""} policy-pap | [2025-06-18T15:23:22.882+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping policy-pap | [2025-06-18T15:23:22.882+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping enqueue policy-pap | [2025-06-18T15:23:22.883+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping timer policy-pap | [2025-06-18T15:23:22.883+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=f460da34-916d-42bf-8d57-49ea1b0f96d7, expireMs=1750260232825] policy-pap | [2025-06-18T15:23:22.883+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping listener policy-pap | [2025-06-18T15:23:22.883+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopped policy-pap | [2025-06-18T15:23:22.892+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-18T15:23:22.893+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate successful policy-pap | [2025-06-18T15:23:22.893+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 has no more requests policy-pap | [2025-06-18T15:23:26.934+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=9824476a-97d1-4e30-ac45-d9064a937d1e, expireMs=1750260206933] policy-pap | [2025-06-18T15:23:39.492+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp 
heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"d911ab4e-4565-4b49-9bec-99386456b326","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260219478","deploymentInstanceInfo":""} policy-pap | [2025-06-18T15:23:39.493+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"d911ab4e-4565-4b49-9bec-99386456b326","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260219478","deploymentInstanceInfo":""} policy-pap | [2025-06-18T15:23:39.493+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-18T15:23:43.142+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms policy-pap | [2025-06-18T15:23:47.275+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group opaGroup policy-pap | [2025-06-18T15:23:47.275+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-2] remove policy vehicle 1.0.6 from subgroup opaGroup opa count=1 policy-pap | [2025-06-18T15:23:47.275+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] Registering an undeploy for policy vehicle 1.0.6 policy-pap | [2025-06-18T15:23:47.276+00:00|INFO|SessionData|http-nio-6969-exec-2] add update opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 opaGroup opa policies=0 policy-pap | [2025-06-18T15:23:47.276+00:00|INFO|SessionData|http-nio-6969-exec-2] update cached group opaGroup policy-pap | [2025-06-18T15:23:47.276+00:00|INFO|SessionData|http-nio-6969-exec-2] updating DB group opaGroup policy-pap | [2025-06-18T15:23:47.284+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-18T15:23:47Z, user=policyadmin)] policy-pap | [2025-06-18T15:23:47.292+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting policy-pap | [2025-06-18T15:23:47.292+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting listener policy-pap | [2025-06-18T15:23:47.292+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting timer policy-pap | [2025-06-18T15:23:47.292+00:00|INFO|TimerManager|http-nio-6969-exec-2] update timer registered Timer [name=baee5c1b-ec23-40fa-9b12-d4ff77366cce, expireMs=1750260257292] policy-pap | [2025-06-18T15:23:47.292+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting enqueue policy-pap | [2025-06-18T15:23:47.292+00:00|INFO|TimerManager|Thread-9] update timer waiting 30000ms Timer [name=baee5c1b-ec23-40fa-9b12-d4ff77366cce, expireMs=1750260257292] policy-pap | [2025-06-18T15:23:47.292+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate started policy-pap | [2025-06-18T15:23:47.293+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"baee5c1b-ec23-40fa-9b12-d4ff77366cce","timestampMs":1750260227276,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:23:47.302+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"baee5c1b-ec23-40fa-9b12-d4ff77366cce","timestampMs":1750260227276,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:23:47.302+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T15:23:47.307+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"baee5c1b-ec23-40fa-9b12-d4ff77366cce","timestampMs":1750260227276,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:23:47.307+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T15:23:47.314+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"baee5c1b-ec23-40fa-9b12-d4ff77366cce","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"1bfce4f5-882f-4bce-a749-ee5d49bcb000","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260227304","deploymentInstanceInfo":""} policy-pap | [2025-06-18T15:23:47.314+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id baee5c1b-ec23-40fa-9b12-d4ff77366cce policy-pap | [2025-06-18T15:23:47.315+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"baee5c1b-ec23-40fa-9b12-d4ff77366cce","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"1bfce4f5-882f-4bce-a749-ee5d49bcb000","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260227304","deploymentInstanceInfo":""} policy-pap | [2025-06-18T15:23:47.316+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping policy-pap | [2025-06-18T15:23:47.316+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping enqueue policy-pap | [2025-06-18T15:23:47.316+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
policy-pap | [2025-06-18T15:23:47.684+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group opaGroup
policy-pap | [2025-06-18T15:23:47.684+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-3] failed to undeploy policy: vehicle null
policy-pap | [2025-06-18T15:23:47.684+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-3] undeploy policy failed
policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: vehicle null
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108)
policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy()
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy()
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258)
policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191)
policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118)
policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986)
policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891)
policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089)
policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979)
policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014)
policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936)
policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659)
policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885)
policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108)
policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128)
policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126)
policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107)
policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90)
policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82)
policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233)
policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191)
policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113)
policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243)
policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113)
policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74)
policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238)
policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362)
policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167)
policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90)
policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483)
policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116)
policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93)
policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344)
policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398)
policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63)
policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903)
policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740)
policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189)
policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658)
policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63)
policy-pap | at java.base/java.lang.Thread.run(Thread.java:840)
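
The stack trace above is the failure path this CSIT run exercises deliberately (the same pattern repeats for abac further down): vehicle 1.0.6 was already undeployed by the earlier PDP_UPDATE, so a second undeploy request finds it in no PDP group and PdpGroupDeleteProvider.undeployPolicy raises PfModelException rather than treating the call as a no-op. A client that wants idempotent cleanup must tolerate this; a hedged sketch follows (the endpoint path and credentials are assumptions for illustration, only the error text comes from this log):

import requests

# Hypothetical cleanup helper: treat "policy does not appear in any PDP
# group" as already-undeployed rather than as a failure.
def undeploy_policy(base_url: str, name: str, version: str, auth) -> bool:
    url = f"{base_url}/policy/pap/v1/pdps/policies/{name}/versions/{version}"
    resp = requests.delete(url, auth=auth)
    if resp.ok:
        return True
    if "does not appear in any PDP group" in resp.text:
        return True  # idempotent: nothing left to undeploy
    resp.raise_for_status()
    return False
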
policy-pap | [2025-06-18T15:23:48.431+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group opaGroup
policy-pap | [2025-06-18T15:23:48.431+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-4] add policy abac 1.0.7 to subgroup opaGroup opa count=2
policy-pap | [2025-06-18T15:23:48.431+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering a deploy for policy abac 1.0.7
policy-pap | [2025-06-18T15:23:48.431+00:00|INFO|SessionData|http-nio-6969-exec-4] add update opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 opaGroup opa policies=1
policy-pap | [2025-06-18T15:23:48.431+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group opaGroup
policy-pap | [2025-06-18T15:23:48.431+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group opaGroup
policy-pap | [2025-06-18T15:23:48.440+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=DEPLOYMENT, timestamp=2025-06-18T15:23:48Z, user=policyadmin)]
policy-pap | [2025-06-18T15:23:48.448+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting
policy-pap | [2025-06-18T15:23:48.448+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting listener
policy-pap | [2025-06-18T15:23:48.448+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting timer
policy-pap | [2025-06-18T15:23:48.448+00:00|INFO|TimerManager|http-nio-6969-exec-4] update timer registered Timer [name=09990849-dc24-4be1-b4a6-aaafc3a68826, expireMs=1750260258448]
policy-pap | [2025-06-18T15:23:48.448+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting enqueue
policy-pap | [2025-06-18T15:23:48.448+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate started
policy-pap | [2025-06-18T15:23:48.449+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap |
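
The PDP_UPDATE payload on the next line deploys abac 1.0.7. Its properties carry two base64-encoded artifacts: data["node.abac"] (a JSON document whose sensor_data list holds id/location/temperature/precipitation/windspeed/humidity/particle_density/timestamp records) and policy["abac"] (the Rego module itself). A small decode sketch, with the Rego that the policy field actually decodes to reproduced in the comment:

import base64, json

# Decode the base64 fields of a native-OPA PDP_UPDATE payload like the
# one below.
def dump_opa_artifacts(pdp_update: dict) -> None:
    props = pdp_update["policiesToBeDeployed"][0]["properties"]
    for name, blob in props["policy"].items():
        print(f"--- policy {name} ---")
        print(base64.b64decode(blob).decode())
    for name, blob in props["data"].items():
        print(f"--- data {name} ---")
        print(json.dumps(json.loads(base64.b64decode(blob)), indent=2))

# The policy["abac"] value in the payload below decodes to:
#
#   package abac
#
#   import rego.v1
#
#   default allow := false
#
#   allow if {
#    viewable_sensor_data
#    action_is_read
#   }
#
#   action_is_read if "read" in input.actions
#
#   viewable_sensor_data contains view_data if {
#    some sensor_data in data.node.abac.sensor_data
#    sensor_data.timestamp >= input.time_period.from
#    sensor_data.timestamp < input.time_period.to
#
#    view_data := {datatype: sensor_data[datatype] | datatype in input.datatypes}
#   }
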
{"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"09990849-dc24-4be1-b4a6-aaafc3a68826","timestampMs":1750260228431,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:23:48.458+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMj
ciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"09990849-dc24-4be1-b4a6-aaafc3a68826","timestampMs":1750260228431,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:23:48.458+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T15:23:48.461+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"09990849-dc24-4be1-b4a6-aaafc3a68826","timestampMs":1750260228431,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-18T15:23:48.461+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-18T15:23:48.503+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"09990849-dc24-4be1-b4a6-aaafc3a68826","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"d5c8c4d2-2559-428c-97a0-f6bb46848850","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260228489","deploymentInstanceInfo":""} policy-pap | [2025-06-18T15:23:48.503+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"09990849-dc24-4be1-b4a6-aaafc3a68826","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"d5c8c4d2-2559-428c-97a0-f6bb46848850","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260228489","deploymentInstanceInfo":""} policy-pap | [2025-06-18T15:23:48.503+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 09990849-dc24-4be1-b4a6-aaafc3a68826 policy-pap | [2025-06-18T15:23:48.504+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping policy-pap | [2025-06-18T15:23:48.504+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping enqueue policy-pap | [2025-06-18T15:23:48.504+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping timer policy-pap | 
[2025-06-18T15:23:48.504+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=09990849-dc24-4be1-b4a6-aaafc3a68826, expireMs=1750260258448]
policy-pap | [2025-06-18T15:23:48.504+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping listener
policy-pap | [2025-06-18T15:23:48.504+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopped
policy-pap | [2025-06-18T15:23:48.517+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification]
policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]}
policy-pap | [2025-06-18T15:23:48.516+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate successful
policy-pap | [2025-06-18T15:23:48.519+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 has no more requests
policy-pap | [2025-06-18T15:24:12.098+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group opaGroup
policy-pap | [2025-06-18T15:24:12.099+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy abac 1.0.7 from subgroup opaGroup opa count=1
policy-pap | [2025-06-18T15:24:12.099+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy abac 1.0.7
policy-pap | [2025-06-18T15:24:12.099+00:00|INFO|SessionData|http-nio-6969-exec-6] add update opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 opaGroup opa policies=0
policy-pap | [2025-06-18T15:24:12.099+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group opaGroup
policy-pap | [2025-06-18T15:24:12.099+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group opaGroup
policy-pap | [2025-06-18T15:24:12.106+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=UNDEPLOYMENT, timestamp=2025-06-18T15:24:12Z, user=policyadmin)]
policy-pap | [2025-06-18T15:24:12.113+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting
policy-pap | [2025-06-18T15:24:12.114+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting listener
policy-pap | [2025-06-18T15:24:12.114+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting timer
policy-pap | [2025-06-18T15:24:12.114+00:00|INFO|TimerManager|http-nio-6969-exec-6] update timer registered Timer [name=447dbb53-d158-49a2-9c89-5d22eb91b982, expireMs=1750260282114]
policy-pap | [2025-06-18T15:24:12.114+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate starting enqueue
policy-pap | [2025-06-18T15:24:12.114+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate started
policy-pap | [2025-06-18T15:24:12.114+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"447dbb53-d158-49a2-9c89-5d22eb91b982","timestampMs":1750260252099,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
policy-pap | [2025-06-18T15:24:12.122+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"447dbb53-d158-49a2-9c89-5d22eb91b982","timestampMs":1750260252099,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
policy-pap | [2025-06-18T15:24:12.122+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-pap | [2025-06-18T15:24:12.123+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-f90d4108-ebf7-469c-8b25-9caf5f307b54","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"447dbb53-d158-49a2-9c89-5d22eb91b982","timestampMs":1750260252099,"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
policy-pap | [2025-06-18T15:24:12.123+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-pap | [2025-06-18T15:24:12.135+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"447dbb53-d158-49a2-9c89-5d22eb91b982","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"72f4896d-e2cc-4065-a81c-86cde00bd9d3","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260252125","deploymentInstanceInfo":""}
policy-pap | [2025-06-18T15:24:12.136+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 447dbb53-d158-49a2-9c89-5d22eb91b982
policy-pap | [2025-06-18T15:24:12.138+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"447dbb53-d158-49a2-9c89-5d22eb91b982","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6","requestId":"72f4896d-e2cc-4065-a81c-86cde00bd9d3","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750260252125","deploymentInstanceInfo":""}
policy-pap | [2025-06-18T15:24:12.139+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping
policy-pap | [2025-06-18T15:24:12.139+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping enqueue
policy-pap | [2025-06-18T15:24:12.139+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping timer
policy-pap | [2025-06-18T15:24:12.139+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=447dbb53-d158-49a2-9c89-5d22eb91b982, expireMs=1750260282114]
policy-pap | [2025-06-18T15:24:12.139+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopping listener
policy-pap | [2025-06-18T15:24:12.139+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate stopped
policy-pap | [2025-06-18T15:24:12.148+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 PdpUpdate successful
policy-pap | [2025-06-18T15:24:12.149+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-66d2b31b-f699-4e81-bc5a-1c1c2acffcc6 has no more requests
policy-pap | [2025-06-18T15:24:12.149+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification]
policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}]}
policy-pap | [2025-06-18T15:24:12.444+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group opaGroup
policy-pap | [2025-06-18T15:24:12.445+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-7] failed to undeploy policy: abac null
policy-pap | [2025-06-18T15:24:12.445+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-7] undeploy policy failed
policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: abac null
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108)
policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy()
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy()
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258)
policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191)
policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118)
policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986)
policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891)
policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089)
policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979)
policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014)
policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936)
policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659)
policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885)
policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108)
policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128)
policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126)
policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107)
policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90)
policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82)
policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233)
policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191)
policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113)
policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243)
policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113)
policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74)
policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238)
policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362)
policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167)
policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90)
policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483)
policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116)
policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93)
policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344)
policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398)
policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63)
policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903)
policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740)
policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189)
policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658)
policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63)
policy-pap | at java.base/java.lang.Thread.run(Thread.java:840)
org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) policy-pap | [2025-06-18T15:24:17.293+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=baee5c1b-ec23-40fa-9b12-d4ff77366cce, expireMs=1750260257292] postgres | The files belonging to this database system will 
be owned by user "postgres". postgres | This user must also own the server process. postgres | postgres | The database cluster will be initialized with locale "en_US.utf8". postgres | The default database encoding has accordingly been set to "UTF8". postgres | The default text search configuration will be set to "english". postgres | postgres | Data page checksums are disabled. postgres | postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok postgres | creating subdirectories ... ok postgres | selecting dynamic shared memory implementation ... posix postgres | selecting default max_connections ... 100 postgres | selecting default shared_buffers ... 128MB postgres | selecting default time zone ... Etc/UTC postgres | creating configuration files ... ok postgres | running bootstrap script ... ok postgres | performing post-bootstrap initialization ... ok postgres | syncing data to disk ... ok postgres | postgres | postgres | Success. You can now start the database server using: postgres | postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start postgres | postgres | initdb: warning: enabling "trust" authentication for local connections postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. postgres | waiting for server to start....2025-06-18 15:19:05.984 UTC [48] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-18 15:19:05.987 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-18 15:19:05.992 UTC [51] LOG: database system was shut down at 2025-06-18 15:19:05 UTC postgres | 2025-06-18 15:19:05.999 UTC [48] LOG: database system is ready to accept connections postgres | done postgres | server started postgres | postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf postgres | postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh postgres | #!/bin/bash -xv postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved postgres | # postgres | # Licensed under the Apache License, Version 2.0 (the "License"); postgres | # you may not use this file except in compliance with the License. postgres | # You may obtain a copy of the License at postgres | # postgres | # http://www.apache.org/licenses/LICENSE-2.0 postgres | # postgres | # Unless required by applicable law or agreed to in writing, software postgres | # distributed under the License is distributed on an "AS IS" BASIS, postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. postgres | # See the License for the specific language governing permissions and postgres | # limitations under the License. 
postgres | postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' postgres | CREATE ROLE postgres | postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | do postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" postgres | done postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' postgres | GRANT postgres | postgres | waiting for server to shut down....2025-06-18 15:19:07.206 UTC [48] LOG: 
received fast shutdown request postgres | 2025-06-18 15:19:07.208 UTC [48] LOG: aborting any active transactions postgres | 2025-06-18 15:19:07.209 UTC [48] LOG: background worker "logical replication launcher" (PID 54) exited with exit code 1 postgres | 2025-06-18 15:19:07.211 UTC [49] LOG: shutting down postgres | 2025-06-18 15:19:07.213 UTC [49] LOG: checkpoint starting: shutdown immediate postgres | 2025-06-18 15:19:07.723 UTC [49] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.371 s, sync=0.132 s, total=0.512 s; sync files=1788, longest=0.013 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 postgres | 2025-06-18 15:19:07.739 UTC [48] LOG: database system is shut down postgres | done postgres | server stopped postgres | postgres | PostgreSQL init process complete; ready for start up. postgres | postgres | 2025-06-18 15:19:07.841 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-18 15:19:07.841 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 postgres | 2025-06-18 15:19:07.841 UTC [1] LOG: listening on IPv6 address "::", port 5432 postgres | 2025-06-18 15:19:07.844 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-18 15:19:07.849 UTC [101] LOG: database system was shut down at 2025-06-18 15:19:07 UTC postgres | 2025-06-18 15:19:07.855 UTC [1] LOG: database system is ready to accept connections postgres | 2025-06-18 15:24:07.923 UTC [99] LOG: checkpoint starting: time postgres | 2025-06-18 15:25:12.631 UTC [99] LOG: checkpoint complete: wrote 650 buffers (4.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=64.677 s, sync=0.021 s, total=64.709 s; sync files=515, longest=0.002 s, average=0.001 s; distance=3534 kB, estimate=3534 kB; lsn=0/3150318, redo lsn=0/314DDE0 prometheus | time=2025-06-18T15:19:04.118Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d prometheus | time=2025-06-18T15:19:04.118Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" prometheus | time=2025-06-18T15:19:04.118Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" prometheus | time=2025-06-18T15:19:04.119Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs prometheus | time=2025-06-18T15:19:04.122Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 prometheus | time=2025-06-18T15:19:04.123Z level=INFO source=main.go:1266 msg="Starting TSDB ..." prometheus | time=2025-06-18T15:19:04.127Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 prometheus | time=2025-06-18T15:19:04.127Z level=INFO source=tls_config.go:350 msg="TLS is disabled." 
component=web http2=false address=[::]:9090 prometheus | time=2025-06-18T15:19:04.129Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb prometheus | time=2025-06-18T15:19:04.129Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1.51µs prometheus | time=2025-06-18T15:19:04.129Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb prometheus | time=2025-06-18T15:19:04.131Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=1.231328ms prometheus | time=2025-06-18T15:19:04.131Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=42.36µs wal_replay_duration=1.263309ms wbl_replay_duration=140ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.51µs total_replay_duration=1.389081ms prometheus | time=2025-06-18T15:19:04.135Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC prometheus | time=2025-06-18T15:19:04.135Z level=INFO source=main.go:1290 msg="TSDB started" prometheus | time=2025-06-18T15:19:04.135Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml prometheus | time=2025-06-18T15:19:04.137Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 prometheus | time=2025-06-18T15:19:04.137Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.11µs remote_storage=1.93µs web_handler=510ns query_engine=1.54µs scrape=269.394µs scrape_sd=558.788µs notify=210.504µs notify_sd=25.62µs rules=2.6µs tracing=8.98µs filename=/etc/prometheus/prometheus.yml totalDuration=1.936399ms prometheus | time=2025-06-18T15:19:04.137Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." prometheus | time=2025-06-18T15:19:04.137Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2025-06-18 15:19:09,447] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-18 15:19:09,450] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-18 15:19:09,450] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-18 15:19:09,450] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-18 15:19:09,450] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-18 15:19:09,451] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-18 15:19:09,451] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-18 15:19:09,451] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-18 15:19:09,451] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2025-06-18 15:19:09,453] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2025-06-18 15:19:09,453] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-18 15:19:09,454] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-18 15:19:09,454] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-18 15:19:09,454] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-18 15:19:09,454] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-18 15:19:09,454] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2025-06-18 15:19:09,466] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2025-06-18 15:19:09,469] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-18 15:19:09,469] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-18 15:19:09,471] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-18 15:19:09,479] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,479] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,479] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,479] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,479] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,480] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,480] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,480] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,480] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,480] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,481] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,481] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,481] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,481] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | 
[2025-06-18 15:19:09,482] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,484] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-reso
urce-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kaf
ka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,484] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,484] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,484] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,484] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,484] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,484] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,484] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,484] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,484] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,484] INFO Server environment:os.memory.free=494MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,484] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,484] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,484] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,484] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,485] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,485] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,485] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,485] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,485] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,486] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper | [2025-06-18 15:19:09,487] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,487] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,488] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-18 15:19:09,488] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-18 15:19:09,489] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-18 15:19:09,489] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-18 15:19:09,489] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-18 15:19:09,489] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-18 15:19:09,489] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-18 15:19:09,489] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-18 15:19:09,492] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,492] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,492] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-18 15:19:09,492] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-18 15:19:09,492] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,521] INFO Logging initialized @495ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper | [2025-06-18 15:19:09,599] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-18 15:19:09,599] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-18 15:19:09,616] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server) zookeeper | [2025-06-18 15:19:09,669] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper | [2025-06-18 15:19:09,669] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper | [2025-06-18 15:19:09,671] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) zookeeper | [2025-06-18 15:19:09,674] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper | [2025-06-18 15:19:09,685] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-18 15:19:09,695] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper | [2025-06-18 15:19:09,695] INFO Started @673ms (org.eclipse.jetty.server.Server) zookeeper | [2025-06-18 15:19:09,696] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper | [2025-06-18 15:19:09,699] INFO Using 
org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-18 15:19:09,700] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-18 15:19:09,701] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-18 15:19:09,702] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-18 15:19:09,714] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-18 15:19:09,714] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-18 15:19:09,714] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-18 15:19:09,714] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-18 15:19:09,719] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper | [2025-06-18 15:19:09,719] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-18 15:19:09,723] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-18 15:19:09,723] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-18 15:19:09,724] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-18 15:19:09,731] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper | [2025-06-18 15:19:09,733] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper | [2025-06-18 15:19:09,748] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper | [2025-06-18 15:19:09,749] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper | [2025-06-18 15:19:12,017] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) Tearing down containers... 
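The ZooKeeper AdminServer logged above listens on port 8080 with command URL /commands. Before teardown, a standalone instance like this one could be probed over that interface; a minimal sketch, assuming the service is reachable under its compose name zookeeper:

# Query the embedded Jetty AdminServer (port 8080, as logged by JettyAdminServer).
# "zookeeper:8080" is an assumed address; adjust to the actual compose network.
curl -sf http://zookeeper:8080/commands        # list the available commands
curl -sf http://zookeeper:8080/commands/stat   # server state (standalone) and client stats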
Container policy-csit Stopping
Container policy-opa-pdp Stopping
Container policy-csit Stopped
Container policy-csit Removing
Container grafana Stopping
Container policy-csit Removed
Container grafana Stopped
Container grafana Removing
Container grafana Removed
Container prometheus Stopping
Container prometheus Stopped
Container prometheus Removing
Container prometheus Removed
Container policy-opa-pdp Stopped
Container policy-opa-pdp Removing
Container policy-opa-pdp Removed
Container policy-pap Stopping
Container policy-pap Stopped
Container policy-pap Removing
Container policy-pap Removed
Container kafka Stopping
Container policy-api Stopping
Container kafka Stopped
Container kafka Removing
Container kafka Removed
Container zookeeper Stopping
Container zookeeper Stopped
Container zookeeper Removing
Container zookeeper Removed
Container policy-api Stopped
Container policy-api Removing
Container policy-api Removed
Container policy-db-migrator Stopping
Container policy-db-migrator Stopped
Container policy-db-migrator Removing
Container policy-db-migrator Removed
Container postgres Stopping
Container postgres Stopped
Container postgres Removing
Container postgres Removed
Network compose_default Removing
Network compose_default Removed
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2076 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml: Done!
-Copying log files to build dir: Done!
-Assigning results to build: Done!
-Checking thresholds: Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins3739083844045227096.sh
---> sysstat.sh
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins18433079882687847696.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp ']'
+ mkdir -p /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/archives/
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins8487258646167239509.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-OWna from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-OWna/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins5016435608486585794.sh
provisioning config files...
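The package-listing.sh trace above snapshots the installed packages and diffs them against the start-of-job list. A condensed sketch of the same idea, with paths taken from the trace (WORKSPACE is an assumed Jenkins variable):

#!/bin/bash
# Snapshot installed Debian packages, diff against the start-of-job list,
# and archive all three files, mirroring the package-listing.sh trace above.
set -u
START=/tmp/packages_start.txt
END=/tmp/packages_end.txt
DIFF=/tmp/packages_diff.txt
ARCHIVES="${WORKSPACE:-.}/archives"
dpkg -l | grep '^ii' > "$END"              # current package list
if [ -f "$START" ]; then
  diff "$START" "$END" > "$DIFF" || true   # diff exits 1 when the lists differ
fi
mkdir -p "$ARCHIVES"
cp -f "$DIFF" "$END" "$START" "$ARCHIVES"/ 2>/dev/null || true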
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp@tmp/config10561910296703023390tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins16345674871588069408.sh
---> create-netrc.sh
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins5828775023856807610.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-OWna from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-OWna/bin to PATH
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins6264686734168611158.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins7818554744560107725.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-OWna from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-OWna/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash -l /tmp/jenkins176405575408150226.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-OWna from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-OWna/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-opa-pdp-master-project-csit-verify-opa-pdp/162
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
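What follows is the host snapshot appended to every run. A hypothetical wrapper that would produce output in the same shape as the sections below (the command list is taken from the banners that appear in the log):

#!/bin/bash
# Emit each system report under a '---> cmd:' banner, matching the layout below.
for cmd in 'uname -a' 'lscpu' 'nproc' 'df -h' 'free -m' 'ip addr' \
           'sar -b -r -n DEV' 'sar -P ALL'; do
  echo "---> ${cmd}:"
  ${cmd}   # intentionally unquoted so multi-word commands split into arguments
done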
INFO: archiving logs to Nexus

---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-22131 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2799.998
BogoMIPS:            5599.99
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem   Size  Used  Avail  Use%  Mounted on
udev          16G     0    16G    0%  /dev
tmpfs        3.2G  708K   3.2G    1%  /run
/dev/vda1    155G   15G   141G   10%  /
tmpfs         16G     0    16G    0%  /dev/shm
tmpfs        5.0M     0   5.0M    0%  /run/lock
tmpfs         16G     0    16G    0%  /sys/fs/cgroup
/dev/vda15   105M  4.4M   100M    5%  /boot/efi
tmpfs        3.2G     0   3.2G    0%  /run/user/1001

---> free -m:
       total  used  free   shared  buff/cache  available
Mem:   32167  900   24030  0       7235        30810
Swap:  1023   0     1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
    valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:e5:fc:2e brd ff:ff:ff:ff:ff:ff
    inet 10.30.106.75/23 brd 10.30.107.255 scope global dynamic ens3
    valid_lft 85826sec preferred_lft 85826sec
    inet6 fe80::f816:3eff:fee5:fc2e/64 scope link
    valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:e0:09:a6:24 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
    valid_lft forever preferred_lft forever
    inet6 fe80::42:e0ff:fe09:a624/64 scope link
    valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-22131) 06/18/25 _x86_64_ (8 CPU)

15:16:59 LINUX RESTART (8 CPU)

15:17:01  tps      rtps   wtps    bread/s  bwrtn/s
15:18:02  373.31   74.86  298.45  5340.35  97183.87
15:19:01  536.84   21.22  515.62  2337.37  241332.25
15:20:01  344.63   5.80   338.83  418.86   73790.23
15:21:01  3.73     0.00   3.73    0.00     92.25
15:22:01  6.22     0.02   6.20    0.13     146.24
15:23:01  218.58   0.45   218.13  49.85    33888.17
15:24:01  6.45     0.00   6.45    0.00     161.31
15:25:01  9.45     0.00   9.45    0.00     262.36
15:26:01  57.16    0.97   56.19   57.86    1144.74
Average:  172.26   11.46  160.80  909.02   49423.20

15:17:01  kbmemfree  kbavail   kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
15:18:02  30134772   31590188  2804448    8.51      56936      1717040   1481260   4.36     957548    1569412  110236
15:19:01  25260800   31617680  7678420    23.31     148132     6300732   1709736   5.03     1033748   6076796  494760
15:20:01  23294516   29963540  9644704    29.28     163616     6598104   7442404   21.90    2901600   6094256  40
15:21:01  23364348   30019032  9574872    29.07     163764     6584380   7537420   22.18    2844912   6078696  168
15:22:01  23351452   30007316  9587768    29.11     164004     6585860   7579144   22.30    2859772   6077920  404
15:23:01  22713040   29907780  10226180   31.05     204412     7031584   7873372   23.17    3075388   6443552  1928
15:24:01  22700136   29896120  10239084   31.08     204496     7032352   7911392   23.28    3090968   6437916  568
15:25:01  22695532   29891952  10243688   31.10     204628     7032552   7926948   23.32    3096116   6437724  580
15:26:01  24669300   31608288  8269920    25.11     205552     6770256   1559476   4.59     1439268   6197380  28808
Average:  24242655   30500211  8696565    26.40     168393     6183651   5669017   16.68    2366591   5712628  70832

15:17:01  IFACE            rxpck/s  txpck/s  rxkB/s    txkB/s   rxcmp/s  txcmp/s  rxmcst/s  %ifutil
15:18:02  ens3             493.47   338.97   1676.68   81.59    0.00     0.00     0.00      0.00
15:18:02  docker0          0.00     0.00     0.00      0.00     0.00     0.00     0.00      0.00
15:18:02  lo               1.93     1.93     0.20      0.20     0.00     0.00     0.00      0.00
15:19:01  ens3             1210.79  714.25   33729.32  62.78    0.00     0.00     0.00      0.00
15:19:01  br-2f9e305b65d5  0.00     0.00     0.00      0.00     0.00     0.00     0.00      0.00
15:19:01  docker0          0.00     0.00     0.00      0.00     0.00     0.00     0.00      0.00
15:19:01  lo               14.03    14.03    1.30      1.30     0.00     0.00     0.00      0.00
15:20:01  ens3             64.49    46.63    311.11    6.34     0.00     0.00     0.00      0.00
15:20:01  veth25339c6      150.31   172.89   27.86     26.74    0.00     0.00     0.00      0.00
15:20:01  br-2f9e305b65d5  43.33    59.34    2.57      309.23   0.00     0.00     0.00      0.00
15:20:01  vethf038318      0.27     0.65     0.02      0.46     0.00     0.00     0.00      0.00
15:21:01  ens3             1.13     1.08     0.13      0.34     0.00     0.00     0.00      0.00
15:21:01  veth25339c6      0.00     0.02     0.00      0.00     0.00     0.00     0.00      0.00
15:21:01  br-2f9e305b65d5  0.48     0.30     0.03      0.02     0.00     0.00     0.00      0.00
15:21:01  vethf038318      0.37     0.37     0.04      1.00     0.00     0.00     0.00      0.00
15:22:01  ens3             0.73     0.63     0.11      0.05     0.00     0.00     0.00      0.00
15:22:01  veth25339c6      98.40    98.87    24.73     11.20    0.00     0.00     0.00      0.00
15:22:01  br-2f9e305b65d5  0.00     0.00     0.00      0.00     0.00     0.00     0.00      0.00
15:22:01  vethf038318      0.58     0.57     0.06      1.21     0.00     0.00     0.00      0.00
15:23:01  ens3             245.18   166.98   2203.02   14.45    0.00     0.00     0.00      0.00
15:23:01  veth25339c6      165.53   166.38   40.93     18.17    0.00     0.00     0.00      0.00
15:23:01  br-2f9e305b65d5  0.25     0.27     0.02      0.02     0.00     0.00     0.00      0.00
15:23:01  vethf038318      0.62     0.70     0.06      1.22     0.00     0.00     0.00      0.00
15:24:01  ens3             0.83     0.70     0.18      0.28     0.00     0.00     0.00      0.00
15:24:01  veth25339c6      544.79   547.34   132.49    58.97    0.00     0.00     0.00      0.01
15:24:01  br-2f9e305b65d5  0.02     0.00     0.00      0.00     0.00     0.00     0.00      0.00
15:24:01  vethf038318      0.60     0.60     0.06      1.28     0.00     0.00     0.00      0.00
15:25:01  ens3             1.20     0.75     0.25      0.33     0.00     0.00     0.00      0.00
15:25:01  veth25339c6      139.01   139.64   33.61     14.96    0.00     0.00     0.00      0.00
15:25:01  br-2f9e305b65d5  0.00     0.00     0.00      0.00     0.00     0.00     0.00      0.00
15:25:01  vethf038318      0.62     0.60     0.06      1.29     0.00     0.00     0.00      0.00
15:26:01  ens3             43.98    35.48    64.70     31.16    0.00     0.00     0.00      0.00
15:26:01  docker0          134.66   184.45   8.56      1353.65  0.00     0.00     0.00      0.00
15:26:01  lo               30.39    30.39    2.68      2.68     0.00     0.00     0.00      0.00
Average:  ens3             227.27   144.00   4165.79   21.85    0.00     0.00     0.00      0.00
Average:  docker0          14.99    20.53    0.95      150.68   0.00     0.00     0.00      0.00
Average:  lo               3.07     3.07     0.27      0.27     0.00     0.00     0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-22131) 06/18/25 _x86_64_ (8 CPU)

15:16:59 LINUX RESTART (8 CPU)

15:17:01  CPU  %user  %nice  %system  %iowait  %steal  %idle
15:18:02  all  8.23   0.00   1.31     2.29     0.04    88.14
15:18:02  0    4.38   0.00   1.13     1.97     0.07    92.45
15:18:02  1    5.12   0.00   0.88     0.28     0.03    93.68
15:18:02  2    6.49   0.00   0.83     2.39     0.02    90.27
15:18:02  3    9.10   0.00   1.86     0.62     0.03    88.40
15:18:02  4    16.83  0.00   2.51     4.52     0.05    76.10
15:18:02  5    5.55   0.00   0.87     7.94     0.03    85.61
15:18:02  6    10.04  0.00   1.12     0.33     0.03    88.48
15:18:02  7    8.33   0.00   1.29     0.28     0.02    90.08
15:19:01  all  19.27  0.00   7.54     4.87     0.07    68.25
15:19:01  0    15.59  0.00   7.77     2.99     0.05    73.60
15:19:01  1    14.59  0.00   6.86     2.88     0.05    75.62
15:19:01  2    15.38  0.00   5.86     0.75     0.05    77.96
15:19:01  3    14.64  0.00   7.00     1.61     0.07    76.69
15:19:01  4    25.27  0.00   9.73     18.43    0.10    46.48
15:19:01  5    14.54  0.00   6.77     8.37     0.07    70.26
15:19:01  6    20.25  0.00   7.84     2.14     0.07    69.70
15:19:01  7    33.88  0.00   8.47     1.78     0.07    55.80
15:20:01  all  26.49  0.00   3.71     1.68     0.09    68.03
15:20:01  0    26.12  0.00   3.57     0.65     0.08    69.57
15:20:01  1    25.22  0.00   3.66     1.52     0.08    69.51
15:20:01  2    29.44  0.00   3.70     0.67     0.08    66.10
15:20:01  3    29.96  0.00   3.72     2.16     0.08    64.08
15:20:01  4    22.83  0.00   3.40     2.49     0.10    71.18
15:20:01  5    32.28  0.00   4.34     1.66     0.10    61.62
15:20:01  6    25.79  0.00   3.77     1.12     0.10    69.22
15:20:01  7    20.19  0.00   3.56     3.21     0.07    72.98
15:21:01  all  1.29   0.00   0.22     0.01     0.04    98.43
15:21:01  0    1.62   0.00   0.27     0.02     0.07    98.03
15:21:01  1    1.27   0.00   0.22     0.00     0.05    98.46
15:21:01  2    1.30   0.00   0.18     0.00     0.03    98.48
15:21:01  3    0.75   0.00   0.13     0.00     0.02    99.10
15:21:01  4    1.16   0.00   0.28     0.02     0.07    98.47
15:21:01  5    2.09   0.00   0.30     0.00     0.07    97.55
15:21:01  6    1.02   0.00   0.15     0.00     0.02    98.82
15:21:01  7    1.14   0.00   0.17     0.07     0.05    98.58
15:22:01  all  2.21   0.00   0.35     0.04     0.04    97.36
15:22:01  0    2.10   0.00   0.30     0.07     0.05    97.48
15:22:01  1    2.10   0.00   0.23     0.00     0.03    97.63
15:22:01  2    3.59   0.00   0.40     0.00     0.03    95.98
15:22:01  3    1.95   0.00   0.33     0.02     0.03    97.66
15:22:01  4    1.37   0.00   0.30     0.00     0.05    98.27
15:22:01  5    2.46   0.00   0.28     0.02     0.03    97.21
15:22:01  6    2.02   0.00   0.55     0.00     0.03    97.40
15:22:01  7    2.09   0.00   0.38     0.18     0.07    97.28
15:23:01  all  9.69   0.00   2.87     0.97     0.06    86.41
15:23:01  0    8.17   0.00   2.81     0.12     0.07    88.83
15:23:01  1    13.07  0.00   3.77     2.76     0.07    80.33
15:23:01  2    8.63   0.00   2.78     0.27     0.07    88.26
15:23:01  3    10.39  0.00   3.47     0.23     0.07    85.83
15:23:01  4    6.76   0.00   1.39     0.05     0.08    91.71
15:23:01  5    8.92   0.00   2.36     0.15     0.07    88.50
15:23:01  6    10.28  0.00   3.02     0.25     0.05    86.40
15:23:01  7    11.26  0.00   3.32     3.96     0.05    81.40
15:24:01  all  3.44   0.00   0.66     0.03     0.05    95.83
15:24:01  0    2.67   0.00   0.70     0.02     0.05    96.56
15:24:01  1    3.76   0.00   0.55     0.03     0.03    95.63
15:24:01  2    3.24   0.00   0.57     0.00     0.07    96.12
15:24:01  3    3.32   0.00   0.43     0.00     0.03    96.21
15:24:01  4    3.85   0.00   0.61     0.00     0.05    95.49
15:24:01  5    3.97   0.00   0.48     0.02     0.03    95.49
15:24:01  6    3.51   0.00   0.74     0.02     0.05    95.69
15:24:01  7    3.17   0.00   1.15     0.12     0.05    95.51
15:25:01  all  1.36   0.00   0.24     0.02     0.05    98.33
15:25:01  0    1.72   0.00   0.20     0.00     0.03    98.05
15:25:01  1    1.25   0.00   0.15     0.02     0.05    98.53
15:25:01  2    1.00   0.00   0.35     0.02     0.05    98.58
15:25:01  3    0.85   0.00   0.20     0.00     0.07    98.88
15:25:01  4    2.55   0.00   0.15     0.02     0.03    97.25
15:25:01  5    1.68   0.00   0.32     0.02     0.05    97.93
15:25:01  6    0.97   0.00   0.35     0.03     0.07    98.58
15:25:01  7    0.80   0.00   0.22     0.10     0.07    98.82
15:26:01  all  3.80   0.00   0.85     0.12     0.04    95.19
15:26:01  0    1.99   0.00   0.82     0.03     0.03    97.13
15:26:01  1    2.27   0.00   0.85     0.05     0.03    96.80
15:26:01  2    13.80  0.00   1.22     0.10     0.05    84.83
15:26:01  3    3.05   0.00   0.78     0.02     0.05    96.09
15:26:01  4    1.41   0.00   0.64     0.02     0.03    97.91
15:26:01  5    4.79   0.00   0.95     0.05     0.02    94.19
15:26:01  6    1.02   0.00   0.70     0.07     0.05    98.16
15:26:01  7    2.05   0.00   0.85     0.60     0.03    96.46
Average:  all  8.38   0.00   1.95     1.10     0.05    88.51
Average:  0    7.12   0.00   1.94     0.65     0.06    90.24
Average:  1    7.60   0.00   1.89     0.83     0.05    89.63
Average:  2    9.19   0.00   1.75     0.47     0.05    88.54
Average:  3    8.20   0.00   1.98     0.51     0.05    89.26
Average:  4    9.05   0.00   2.09     2.79     0.06    86.00
Average:  5    8.45   0.00   1.84     2.00     0.05    87.66
Average:  6    8.28   0.00   2.01     0.44     0.05    89.22
Average:  7    9.14   0.00   2.14     1.14     0.05    87.54
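To post-process a report like the sar -P ALL table above, the aggregate rows can be pulled out with awk; a quick sketch, assuming the report was saved to a hypothetical file sar_cpu.txt:

# Print interval and %idle (the last field) for the all-CPU rows only;
# per-interval rows where field 2 is "all", plus the final "Average: all" row.
awk '$2 == "all" { printf "%s idle=%s%%\n", $1, $NF }' sar_cpu.txt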