Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-22753 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-uBoHpPhunwq9/agent.2098
SSH_AGENT_PID=2100
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp@tmp/private_key_9966112706751815899.key (/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp@tmp/private_key_9966112706751815899.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 8b99874d0fe646f509546f6b38b185b8f089ba50 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 8b99874d0fe646f509546f6b38b185b8f089ba50 # timeout=30
Commit message: "Add missing delete composition in CSIT"
 > git rev-list --no-walk 8b99874d0fe646f509546f6b38b185b8f089ba50 # timeout=10
provisioning config files...
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins9429535541527332406.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-N0J9
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-N0J9/bin to PATH
Generating Requirements File
Python 3.10.6
pip 25.1.1 from /tmp/venv-N0J9/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.6.2
aspy.yaml==1.3.0
attrs==25.3.0
autopage==0.5.2
beautifulsoup4==4.13.4
boto3==1.38.41
botocore==1.38.41
bs4==0.0.2
cachetools==5.5.2
certifi==2025.6.15
cffi==1.17.1
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.4.2
click==8.2.1
cliff==4.10.0
cmd2==2.6.1
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.2.1
defusedxml==0.7.1
Deprecated==1.2.18
distlib==0.3.9
dnspython==2.7.0
docker==7.1.0
dogpile.cache==1.4.0
durationpy==0.10
email_validator==2.2.0
filelock==3.18.0
future==1.0.0
gitdb==4.0.12
GitPython==3.1.44
google-auth==2.40.3
httplib2==0.22.0
identify==2.6.12
idna==3.10
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.6
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.24.0
jsonschema-specifications==2025.4.1
keystoneauth1==5.11.1
kubernetes==33.1.0
lftools==0.37.13
lxml==5.4.0
MarkupSafe==3.0.2
msgpack==1.1.1
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.3.1
openstacksdk==4.6.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==4.0.2
oslo.config==9.8.0
oslo.context==6.0.0
oslo.i18n==6.5.1
oslo.log==7.1.0
oslo.serialization==5.7.0
oslo.utils==9.0.0
packaging==25.0
pbr==6.1.1
platformdirs==4.3.8
prettytable==3.16.0
psutil==7.0.0
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.6.1
PyJWT==2.10.1
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.7.0
python-dateutil==2.9.0.post0
python-heatclient==4.2.0
python-jenkins==1.8.2
python-keystoneclient==5.6.0
python-magnumclient==4.8.1
python-openstackclient==8.1.0
python-swiftclient==4.8.0
PyYAML==6.0.2
referencing==0.36.2
requests==2.32.4
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.25.1
rsa==4.9.1
ruamel.yaml==0.18.14
ruamel.yaml.clib==0.2.12
s3transfer==0.13.0
simplejson==3.20.1
six==1.17.0
smmap==5.0.2
soupsieve==2.7
stevedore==5.4.1
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.3
tqdm==4.67.1
typing_extensions==4.14.0
tzdata==2025.2
urllib3==1.26.20
virtualenv==20.31.2
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.17.2
xdg==6.0.0
xmltodict==0.14.2
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
[policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/sh /tmp/jenkins3745826417190946724.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/sh -xe /tmp/jenkins2617794379009172104.sh
+ /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/csit/run-project-csit.sh policy-opa-pdp
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
[curl download progress output omitted; 60.2M downloaded]
Setting project configuration for: policy-opa-pdp
Configuring docker compose...
Starting opa-pdp using postgres + Grafana/Prometheus
grafana Pulling
opa-pdp Pulling
pap Pulling
zookeeper Pulling
policy-db-migrator Pulling
api Pulling
postgres Pulling
prometheus Pulling
kafka Pulling
[per-layer "Pulling fs layer" / Downloading / Verifying Checksum / Extracting / Pull complete progress output omitted]
api Pulled
pap Pulled
opa-pdp Pulled
[remaining image layer download/extraction progress output omitted]
229.5MB/257.9MB eabd8714fec9 Extracting [================================> ] 242.3MB/375MB 55f2b468da67 Extracting [============================================> ] 231.7MB/257.9MB eabd8714fec9 Extracting [================================> ] 245.7MB/375MB 55f2b468da67 Extracting [=============================================> ] 234MB/257.9MB eabd8714fec9 Extracting [=================================> ] 247.9MB/375MB 55f2b468da67 Extracting [=============================================> ] 235.1MB/257.9MB 408012a7b118 Extracting [==================================================>] 637B/637B 408012a7b118 Extracting [==================================================>] 637B/637B eabd8714fec9 Extracting [=================================> ] 252.3MB/375MB 55f2b468da67 Extracting [==============================================> ] 237.3MB/257.9MB eabd8714fec9 Extracting [==================================> ] 258.5MB/375MB 55f2b468da67 Extracting [==============================================> ] 241.8MB/257.9MB eabd8714fec9 Extracting [===================================> ] 265.7MB/375MB eabd8714fec9 Extracting [===================================> ] 266.8MB/375MB eabd8714fec9 Extracting [===================================> ] 267.9MB/375MB 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB eabd8714fec9 Extracting [===================================> ] 269.1MB/375MB 46eab5b44a35 Pull complete 55f2b468da67 Extracting [=================================================> ] 253.5MB/257.9MB 55f2b468da67 Extracting [=================================================> ] 254.6MB/257.9MB eabd8714fec9 Extracting [====================================> ] 270.2MB/375MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB eabd8714fec9 Extracting [====================================> ] 272.4MB/375MB eabd8714fec9 Extracting [====================================> ] 274.1MB/375MB eabd8714fec9 Extracting [=====================================> ] 278MB/375MB eabd8714fec9 Extracting [=====================================> ] 283.5MB/375MB eabd8714fec9 Extracting [======================================> ] 290.8MB/375MB eabd8714fec9 Extracting [=======================================> ] 294.7MB/375MB eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB eabd8714fec9 Extracting [=======================================> ] 299.1MB/375MB eabd8714fec9 Extracting [========================================> ] 301.9MB/375MB eabd8714fec9 Extracting [========================================> ] 304.7MB/375MB 6d64908bb8c7 Pull complete 7009d5001b77 Pull complete c4d302cc468d Extracting [> ] 65.54kB/4.534MB eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB c4d302cc468d Extracting [=============================> ] 2.687MB/4.534MB c4d302cc468d Extracting [==================================================>] 4.534MB/4.534MB eabd8714fec9 Extracting [=========================================> ] 310.3MB/375MB eabd8714fec9 Extracting [=========================================> ] 313.1MB/375MB eabd8714fec9 Extracting [==========================================> ] 315.9MB/375MB eabd8714fec9 Extracting [==========================================> ] 320.3MB/375MB eabd8714fec9 Extracting [===========================================> ] 324.8MB/375MB eabd8714fec9 Extracting 
[===========================================> ] 328.1MB/375MB eabd8714fec9 Extracting [============================================> ] 330.3MB/375MB eabd8714fec9 Extracting [============================================> ] 332MB/375MB eabd8714fec9 Extracting [============================================> ] 335.3MB/375MB eabd8714fec9 Extracting [=============================================> ] 340.4MB/375MB 408012a7b118 Pull complete eabd8714fec9 Extracting [=============================================> ] 341.5MB/375MB 739d956095f0 Extracting [> ] 163.8kB/14.64MB 739d956095f0 Extracting [=> ] 327.7kB/14.64MB 739d956095f0 Extracting [=> ] 491.5kB/14.64MB 55f2b468da67 Pull complete eabd8714fec9 Extracting [=============================================> ] 342MB/375MB 739d956095f0 Extracting [=================> ] 5.079MB/14.64MB 739d956095f0 Extracting [===========================> ] 8.192MB/14.64MB 739d956095f0 Extracting [============================> ] 8.356MB/14.64MB 538deb30e80c Extracting [==================================================>] 1.225kB/1.225kB 538deb30e80c Extracting [==================================================>] 1.225kB/1.225kB 739d956095f0 Extracting [=============================> ] 8.52MB/14.64MB c4d302cc468d Pull complete 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 739d956095f0 Extracting [=====================================> ] 10.98MB/14.64MB 01e0882c90d9 Extracting [=> ] 32.77kB/1.447MB 538deb30e80c Pull complete 82bfc142787e Extracting [> ] 98.3kB/8.613MB eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB grafana Pulled 01e0882c90d9 Extracting [====================> ] 589.8kB/1.447MB 82bfc142787e Extracting [=============> ] 2.261MB/8.613MB 739d956095f0 Extracting [=========================================> ] 12.12MB/14.64MB 01e0882c90d9 Extracting [==================================================>] 1.447MB/1.447MB eabd8714fec9 Extracting [=============================================> ] 343.7MB/375MB 739d956095f0 Extracting [==================================================>] 14.64MB/14.64MB 82bfc142787e Extracting [==============================================> ] 7.963MB/8.613MB eabd8714fec9 Extracting [==============================================> ] 345.4MB/375MB 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB 44986281b8b9 Pull complete bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 01e0882c90d9 Pull complete eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB eabd8714fec9 Extracting [==============================================> ] 351.5MB/375MB 82bfc142787e Pull complete 739d956095f0 Pull complete eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB 531ee2cf3c0c Extracting [> ] 98.3kB/8.066MB eabd8714fec9 Extracting [===============================================> ] 357.6MB/375MB 531ee2cf3c0c Extracting [===> ] 589.8kB/8.066MB eabd8714fec9 Extracting [================================================> ] 367.1MB/375MB 531ee2cf3c0c Extracting [=============================> ] 4.817MB/8.066MB eabd8714fec9 Extracting [=================================================> ] 371.6MB/375MB 531ee2cf3c0c Extracting 
[==================================================>] 8.066MB/8.066MB eabd8714fec9 Extracting [==================================================>] 375MB/375MB 6ce075c32df1 Extracting [==================================================>] 1.071kB/1.071kB 6ce075c32df1 Extracting [==================================================>] 1.071kB/1.071kB bf70c5107ab5 Pull complete 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB 6ce075c32df1 Pull complete 531ee2cf3c0c Pull complete eabd8714fec9 Pull complete ed54a7dee1d8 Extracting [=> ] 32.77kB/1.196MB 46baca71a4ef Pull complete 123d8160bc76 Extracting [==================================================>] 5.239kB/5.239kB 123d8160bc76 Extracting [==================================================>] 5.239kB/5.239kB 1ccde423731d Pull complete 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 7221d93db8a9 Extracting [==================================================>] 100B/100B 7221d93db8a9 Extracting [==================================================>] 100B/100B ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB ed54a7dee1d8 Extracting [==================================================>] 1.196MB/1.196MB 123d8160bc76 Pull complete 6ff3b4b08cc9 Extracting [==================================================>] 1.032kB/1.032kB 6ff3b4b08cc9 Extracting [==================================================>] 1.032kB/1.032kB 45fd2fec8a19 Pull complete b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 7221d93db8a9 Pull complete 7df673c7455d Extracting [==================================================>] 694B/694B 7df673c7455d Extracting [==================================================>] 694B/694B ed54a7dee1d8 Pull complete 12c5c803443f Extracting [==================================================>] 116B/116B 12c5c803443f Extracting [==================================================>] 116B/116B b0e0ef7895f4 Extracting [==========> ] 7.864MB/37.01MB 6ff3b4b08cc9 Pull complete 8f10199ed94b Extracting [=======================> ] 4.129MB/8.768MB be48959ad93c Extracting [==================================================>] 1.033kB/1.033kB be48959ad93c Extracting [==================================================>] 1.033kB/1.033kB b0e0ef7895f4 Extracting [===============> ] 11.4MB/37.01MB 8f10199ed94b Extracting [================================================> ] 8.454MB/8.768MB 12c5c803443f Pull complete 7df673c7455d Pull complete e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB prometheus Pulled be48959ad93c Pull complete 8f10199ed94b Pull complete c70684a5e2f9 Extracting [==================================================>] 19.52kB/19.52kB c70684a5e2f9 Extracting [==================================================>] 19.52kB/19.52kB f963a77d2726 Extracting [==================================================>] 
21.44kB/21.44kB f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB b0e0ef7895f4 Extracting [==============================> ] 22.81MB/37.01MB e27c75a98748 Pull complete f963a77d2726 Pull complete b0e0ef7895f4 Extracting [============================================> ] 32.64MB/37.01MB e73cb4a42719 Extracting [> ] 557.1kB/109.1MB c70684a5e2f9 Pull complete b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB policy-db-migrator Pulled f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB b0e0ef7895f4 Pull complete c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB e73cb4a42719 Extracting [==> ] 4.456MB/109.1MB f3a82e9f1761 Extracting [===========> ] 10.09MB/44.41MB e73cb4a42719 Extracting [===> ] 8.356MB/109.1MB c0c90eeb8aca Pull complete 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 5cfb27c10ea5 Extracting [==================================================>] 852B/852B f3a82e9f1761 Extracting [============================> ] 25.69MB/44.41MB e73cb4a42719 Extracting [======> ] 13.93MB/109.1MB f3a82e9f1761 Extracting [===========================================> ] 38.99MB/44.41MB f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB 5cfb27c10ea5 Pull complete 40a5eed61bb0 Extracting [==================================================>] 98B/98B 40a5eed61bb0 Extracting [==================================================>] 98B/98B f3a82e9f1761 Pull complete 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB e73cb4a42719 Extracting [========> ] 18.38MB/109.1MB 40a5eed61bb0 Pull complete e73cb4a42719 Extracting [==========> ] 23.95MB/109.1MB e040ea11fa10 Extracting [==================================================>] 173B/173B e040ea11fa10 Extracting [==================================================>] 173B/173B 79161a3f5362 Pull complete 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Pull complete e73cb4a42719 Extracting [============> ] 27.85MB/109.1MB 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Extracting [==================================================>] 851B/851B e040ea11fa10 Pull complete 2e8a7df9c2ee Pull complete 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Extracting [==================================================>] 98B/98B e73cb4a42719 Extracting [================> ] 35.09MB/109.1MB 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB e73cb4a42719 Extracting [===================> ] 41.78MB/109.1MB 09d5a3f70313 Extracting [======> ] 13.37MB/109.2MB 10f05dd8b1db Pull complete 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Extracting [==================================================>] 171B/171B 09d5a3f70313 Extracting [=============> ] 28.97MB/109.2MB e73cb4a42719 Extracting [======================> ] 49.02MB/109.1MB 41dac8b43ba6 Pull complete 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 09d5a3f70313 Extracting [===================> ] 
43.45MB/109.2MB e73cb4a42719 Extracting [========================> ] 52.36MB/109.1MB 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 09d5a3f70313 Extracting [=========================> ] 55.15MB/109.2MB 71a9f6a9ab4d Pull complete e73cb4a42719 Extracting [=========================> ] 55.71MB/109.1MB 09d5a3f70313 Extracting [================================> ] 71.86MB/109.2MB da3ed5db7103 Extracting [> ] 557.1kB/127.4MB e73cb4a42719 Extracting [===========================> ] 59.6MB/109.1MB 09d5a3f70313 Extracting [======================================> ] 84.67MB/109.2MB da3ed5db7103 Extracting [====> ] 11.7MB/127.4MB e73cb4a42719 Extracting [==============================> ] 66.29MB/109.1MB 09d5a3f70313 Extracting [============================================> ] 98.04MB/109.2MB da3ed5db7103 Extracting [=========> ] 23.4MB/127.4MB e73cb4a42719 Extracting [=================================> ] 72.42MB/109.1MB 09d5a3f70313 Extracting [================================================> ] 105.3MB/109.2MB da3ed5db7103 Extracting [=============> ] 34.54MB/127.4MB e73cb4a42719 Extracting [====================================> ] 78.54MB/109.1MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB da3ed5db7103 Extracting [==================> ] 47.35MB/127.4MB 09d5a3f70313 Pull complete 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB e73cb4a42719 Extracting [=======================================> ] 85.23MB/109.1MB da3ed5db7103 Extracting [==========================> ] 67.4MB/127.4MB 356f5c2c843b Pull complete kafka Pulled e73cb4a42719 Extracting [==========================================> ] 91.91MB/109.1MB da3ed5db7103 Extracting [=================================> ] 84.67MB/127.4MB e73cb4a42719 Extracting [===========================================> ] 94.7MB/109.1MB da3ed5db7103 Extracting [=======================================> ] 100.8MB/127.4MB e73cb4a42719 Extracting [============================================> ] 97.48MB/109.1MB da3ed5db7103 Extracting [============================================> ] 113.6MB/127.4MB e73cb4a42719 Extracting [==============================================> ] 100.8MB/109.1MB da3ed5db7103 Extracting [===============================================> ] 120.9MB/127.4MB e73cb4a42719 Extracting [===============================================> ] 103.6MB/109.1MB da3ed5db7103 Extracting [=================================================> ] 125.9MB/127.4MB da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB e73cb4a42719 Extracting [================================================> ] 105.8MB/109.1MB da3ed5db7103 Pull complete c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Pull complete zookeeper Pulled e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB e73cb4a42719 Pull complete a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB a83b68436f09 Extracting 
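The layer-by-layer progress above is emitted while docker compose fetches the images referenced by the CSIT stack; a minimal sketch of the equivalent manual steps, with the compose file location assumed for illustration only:

    # assumption: the stack definition lives at compose/docker-compose.yml in the workspace
    docker compose -f compose/docker-compose.yml pull    # fetches images, printing per-layer progress like the output above
    docker compose -f compose/docker-compose.yml up -d   # creates the network and containers listed below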
Network compose_default Creating
Network compose_default Created
Container prometheus Creating
Container zookeeper Creating
Container postgres Creating
Container zookeeper Created
Container kafka Creating
Container prometheus Created
Container grafana Creating
Container postgres Created
Container policy-db-migrator Creating
Container policy-db-migrator Created
Container policy-api Creating
Container grafana Created
Container kafka Created
Container policy-api Created
Container policy-pap Creating
Container policy-pap Created
Container policy-opa-pdp Creating
Container policy-opa-pdp Created
Container postgres Starting
Container prometheus Starting
Container zookeeper Starting
Container zookeeper Started
Container kafka Starting
Container kafka Started
Container prometheus Started
Container grafana Starting
Container grafana Started
Container postgres Started
Container policy-db-migrator Starting
Container policy-db-migrator Started
Container policy-api Starting
Container policy-api Started
Container policy-pap Starting
Container policy-pap Started
Container policy-opa-pdp Starting
Container policy-opa-pdp Started
Prometheus server: http://localhost:30259
Grafana server: http://localhost:30269
Waiting 3 minutes for OPA-PDP to start...
Checking if REST port 30003 is open on localhost ...
IMAGE                                                       NAMES            STATUS
nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT    policy-opa-pdp   Up 3 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT        policy-pap       Up 3 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT        policy-api       Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9           kafka            Up 3 minutes
nexus3.onap.org:10001/grafana/grafana:latest                grafana          Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest      zookeeper        Up 3 minutes
nexus3.onap.org:10001/prom/prometheus:latest                prometheus       Up 3 minutes
nexus3.onap.org:10001/library/postgres:16.4                 postgres         Up 3 minutes
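The "Checking if REST port ..." lines typically come from a polling loop that waits for the OPA-PDP REST endpoint to accept connections before the test container is attached; a minimal sketch of such a probe, assuming nc is available on the agent (timeout and interval are illustrative):

    # poll port 30003 for up to ~3 minutes, 5 seconds between attempts
    for attempt in $(seq 1 36); do
      if nc -z localhost 30003; then
        echo "REST port 30003 is open"
        break
      fi
      sleep 5
    done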
Checking if REST port 30012 is open on localhost ...
IMAGE                                                       NAMES            STATUS
nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT    policy-opa-pdp   Up 3 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT        policy-pap       Up 3 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT        policy-api       Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9           kafka            Up 3 minutes
nexus3.onap.org:10001/grafana/grafana:latest                grafana          Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest      zookeeper        Up 3 minutes
nexus3.onap.org:10001/prom/prometheus:latest                prometheus       Up 3 minutes
nexus3.onap.org:10001/library/postgres:16.4                 postgres         Up 3 minutes
Cloning into '/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/csit/resources/tests/models'...
Building robot framework docker image
sha256:a85019bc7bd1f2e9c9a8fa608604808d26dd4dff04ae4e7d2e41d1835c3c3d3e
top - 11:50:58 up 6 min, 0 users, load average: 1.27, 1.25, 0.65
Tasks: 217 total, 1 running, 148 sleeping, 0 stopped, 0 zombie
%Cpu(s): 9.6 us, 2.3 sy, 0.0 ni, 84.2 id, 3.9 wa, 0.0 hi, 0.1 si, 0.0 st
              total        used        free      shared  buff/cache   available
Mem:            31G        2.4G         21G         28M        7.3G         28G
Swap:          1.0G          0B        1.0G
IMAGE                                                       NAMES            STATUS
nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT    policy-opa-pdp   Up 3 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT        policy-pap       Up 3 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT        policy-api       Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9           kafka            Up 3 minutes
nexus3.onap.org:10001/grafana/grafana:latest                grafana          Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest      zookeeper        Up 3 minutes
nexus3.onap.org:10001/prom/prometheus:latest                prometheus       Up 3 minutes
nexus3.onap.org:10001/library/postgres:16.4                 postgres         Up 3 minutes
CONTAINER ID   NAME             CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O        PIDS
a8ff50fc2d9e   policy-opa-pdp   0.14%   12.09MiB / 31.41GiB   0.04%   81.6kB / 76.7kB   0B / 0B          21
1d7066777a44   policy-pap       0.61%   479.9MiB / 31.41GiB   1.49%   2.21MB / 1.23MB   0B / 139MB       69
c3af476decad   policy-api       0.11%   432.3MiB / 31.41GiB   1.34%   1.15MB / 1.08MB   0B / 0B          57
bf6b5b2aeb72   kafka            1.35%   403.9MiB / 31.41GiB   1.26%   311kB / 295kB     8.19kB / 774kB   83
78c17043d1cd   grafana          0.35%   112.6MiB / 31.41GiB   0.35%   19MB / 232kB      0B / 30.8MB      21
6efc85f82236   zookeeper        0.10%   85.36MiB / 31.41GiB   0.27%   58.9kB / 51.2kB   4.1kB / 369kB    62
d50761372d42   prometheus       0.00%   21.11MiB / 31.41GiB   0.07%   235kB / 10.3kB    98.3kB / 0B      13
bc5470cfcd95   postgres         0.02%   86.27MiB / 31.41GiB   0.27%   2.55MB / 3.73MB   127kB / 158MB    26
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
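Inside the policy-csit container these ROBOT_VARIABLES are handed to the Robot Framework CLI; a minimal sketch of the equivalent invocation (output directory and suite location assumed for illustration):

    # assumption: run from the directory that holds opa-pdp-test.robot and opa-pdp-slas.robot
    robot --outputdir /tmp/results \
          -v POLICY_OPA_IP:policy-opa-pdp:8282 \
          -v PROMETHEUS_IP:prometheus:9090 \
          opa-pdp-test.robot opa-pdp-slas.robot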
policy-csit | Starting Robot test suites ...
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Healthcheck :: Verify OPA PDP health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateDataBeforePolicyDeployment | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesZonePolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesVehiclePolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesAbacPolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test | PASS |
policy-csit | 5 tests, 5 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas | PASS |
policy-csit | 5 tests, 5 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas | PASS |
policy-csit | 10 tests, 10 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
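The Opa-Pdp-Slas cases above assert on counters and average response times scraped by the Prometheus instance exposed at http://localhost:30259; a hedged sketch of querying such a metric by hand (the metric name below is an illustrative placeholder, not taken from this log):

    # assumption: replace pdpo_policy_decisions_total with the metric the suite actually checks
    curl -s 'http://localhost:30259/api/v1/query' \
         --data-urlencode 'query=pdpo_policy_decisions_total' | jq '.data.result'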
IMAGE                                                       NAMES            STATUS
nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT    policy-opa-pdp   Up 6 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT        policy-pap       Up 6 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT        policy-api       Up 6 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9           kafka            Up 6 minutes
nexus3.onap.org:10001/grafana/grafana:latest                grafana          Up 6 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest      zookeeper        Up 6 minutes
nexus3.onap.org:10001/prom/prometheus:latest                prometheus       Up 6 minutes
nexus3.onap.org:10001/library/postgres:16.4                 postgres         Up 6 minutes
Shut down started!
Collecting logs from docker compose containers...
grafana | logger=settings t=2025-06-21T11:47:10.409670077Z level=info msg="Starting Grafana" version=12.0.2 commit=5bda17e7c1cb313eb96266f2fdda73a6b35c3977 branch=HEAD compiled=2025-06-21T11:47:10Z
grafana | logger=settings t=2025-06-21T11:47:10.41004733Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2025-06-21T11:47:10.41008738Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2025-06-21T11:47:10.410127261Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2025-06-21T11:47:10.410152211Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2025-06-21T11:47:10.410215921Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-21T11:47:10.410262592Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-21T11:47:10.410301182Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2025-06-21T11:47:10.410353083Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2025-06-21T11:47:10.410380443Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2025-06-21T11:47:10.410425983Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-21T11:47:10.410454914Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-21T11:47:10.410480784Z level=info msg=Target target=[all]
grafana | logger=settings t=2025-06-21T11:47:10.410522224Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2025-06-21T11:47:10.410562085Z level=info msg="Path Data"
path=/var/lib/grafana grafana | logger=settings t=2025-06-21T11:47:10.410587965Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2025-06-21T11:47:10.410635835Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2025-06-21T11:47:10.410669285Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2025-06-21T11:47:10.410718436Z level=info msg="App mode production" grafana | logger=featuremgmt t=2025-06-21T11:47:10.41110494Z level=info msg=FeatureToggles panelMonitoring=true alertingApiServer=true preinstallAutoUpdate=true pluginsDetailsRightPanel=true ssoSettingsApi=true recordedQueriesMulti=true lokiQuerySplitting=true unifiedStorageSearchPermissionFiltering=true newPDFRendering=true awsAsyncQueryCaching=true ssoSettingsSAML=true alertingRuleVersionHistoryRestore=true newDashboardSharingComponent=true azureMonitorEnableUserAuth=true alertingInsights=true cloudWatchCrossAccountQuerying=true prometheusAzureOverrideAudience=true dashboardScene=true alertingUIOptimizeReducer=true alertingSimplifiedRouting=true logsPanelControls=true prometheusUsesCombobox=true recoveryThreshold=true formatString=true angularDeprecationUI=true cloudWatchNewLabelParsing=true azureMonitorPrometheusExemplars=true groupToNestedTableTransformation=true externalCorePlugins=true alertingNotificationsStepMode=true alertingRulePermanentlyDelete=true lokiStructuredMetadata=true alertingQueryAndExpressionsStepMode=true promQLScope=true correlations=true addFieldFromCalculationStatFunctions=true dashboardSceneSolo=true grafanaconThemes=true pinNavItems=true logsContextDatasourceUi=true reportingUseRawTimeRange=true onPremToCloudMigrations=true logsInfiniteScrolling=true alertRuleRestore=true transformationsRedesign=true tlsMemcached=true dataplaneFrontendFallback=true kubernetesClientDashboardsFolders=true alertingRuleRecoverDeleted=true nestedFolders=true newFiltersUI=true failWrongDSUID=true lokiQueryHints=true logsExploreTableVisualisation=true publicDashboardsScene=true dashboardSceneForViewers=true cloudWatchRoundUpEndTime=true lokiLabelNamesQueryApi=true annotationPermissionUpdate=true useSessionStorageForRedirection=true logRowsPopoverMenu=true influxdbBackendMigration=true dashgpt=true unifiedRequestLog=true kubernetesPlaylists=true grafana | logger=sqlstore t=2025-06-21T11:47:10.411188601Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2025-06-21T11:47:10.411246612Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2025-06-21T11:47:10.412793386Z level=info msg="Locking database" grafana | logger=migrator t=2025-06-21T11:47:10.412833567Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2025-06-21T11:47:10.413508933Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2025-06-21T11:47:10.414365521Z level=info msg="Migration successfully executed" id="create migration_log table" duration=856.048µs grafana | logger=migrator t=2025-06-21T11:47:10.421923734Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2025-06-21T11:47:10.422780252Z level=info msg="Migration successfully executed" id="create user table" duration=856.178µs grafana | logger=migrator t=2025-06-21T11:47:10.433549536Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2025-06-21T11:47:10.436178452Z level=info 
msg="Migration successfully executed" id="add unique index user.login" duration=2.615355ms grafana | logger=migrator t=2025-06-21T11:47:10.475382758Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2025-06-21T11:47:10.476573459Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.177241ms grafana | logger=migrator t=2025-06-21T11:47:10.479965402Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2025-06-21T11:47:10.480751109Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=785.217µs grafana | logger=migrator t=2025-06-21T11:47:10.483752459Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2025-06-21T11:47:10.484514035Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=761.506µs grafana | logger=migrator t=2025-06-21T11:47:10.48913907Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2025-06-21T11:47:10.491570633Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.431173ms grafana | logger=migrator t=2025-06-21T11:47:10.495429961Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2025-06-21T11:47:10.49635541Z level=info msg="Migration successfully executed" id="create user table v2" duration=924.669µs grafana | logger=migrator t=2025-06-21T11:47:10.499423769Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2025-06-21T11:47:10.500277527Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=854.668µs grafana | logger=migrator t=2025-06-21T11:47:10.504314396Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2025-06-21T11:47:10.505133254Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=818.548µs grafana | logger=migrator t=2025-06-21T11:47:10.508328604Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2025-06-21T11:47:10.508774328Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=445.414µs grafana | logger=migrator t=2025-06-21T11:47:10.511735328Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2025-06-21T11:47:10.512319093Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=583.225µs grafana | logger=migrator t=2025-06-21T11:47:10.516302661Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2025-06-21T11:47:10.517425652Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.122711ms grafana | logger=migrator t=2025-06-21T11:47:10.520564932Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2025-06-21T11:47:10.520594222Z level=info msg="Migration successfully executed" id="Update user table charset" duration=29.69µs grafana | logger=migrator t=2025-06-21T11:47:10.523515951Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2025-06-21T11:47:10.524610031Z level=info msg="Migration successfully executed" id="Add 
last_seen_at column to user" duration=1.09398ms grafana | logger=migrator t=2025-06-21T11:47:10.52760547Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2025-06-21T11:47:10.528030904Z level=info msg="Migration successfully executed" id="Add missing user data" duration=424.884µs grafana | logger=migrator t=2025-06-21T11:47:10.532531397Z level=info msg="Executing migration" id="Add is_disabled column to user" grafana | logger=migrator t=2025-06-21T11:47:10.533657548Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.122631ms grafana | logger=migrator t=2025-06-21T11:47:10.537465575Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2025-06-21T11:47:10.538309203Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=843.368µs grafana | logger=migrator t=2025-06-21T11:47:10.541891757Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2025-06-21T11:47:10.543052398Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.160491ms grafana | logger=migrator t=2025-06-21T11:47:10.546502141Z level=info msg="Executing migration" id="Update is_service_account column to nullable" grafana | logger=migrator t=2025-06-21T11:47:10.554435388Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.932307ms grafana | logger=migrator t=2025-06-21T11:47:10.559598248Z level=info msg="Executing migration" id="Add uid column to user" grafana | logger=migrator t=2025-06-21T11:47:10.561864359Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=2.266381ms grafana | logger=migrator t=2025-06-21T11:47:10.565734316Z level=info msg="Executing migration" id="Update uid column values for users" grafana | logger=migrator t=2025-06-21T11:47:10.56616631Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=434.884µs grafana | logger=migrator t=2025-06-21T11:47:10.570553253Z level=info msg="Executing migration" id="Add unique index user_uid" grafana | logger=migrator t=2025-06-21T11:47:10.57137652Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=823.037µs grafana | logger=migrator t=2025-06-21T11:47:10.575323488Z level=info msg="Executing migration" id="Add is_provisioned column to user" grafana | logger=migrator t=2025-06-21T11:47:10.576565961Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.242423ms grafana | logger=migrator t=2025-06-21T11:47:10.581074564Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" grafana | logger=migrator t=2025-06-21T11:47:10.581452427Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=377.473µs grafana | logger=migrator t=2025-06-21T11:47:10.585361975Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once" grafana | logger=migrator t=2025-06-21T11:47:10.586033672Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=670.637µs grafana | logger=migrator 
t=2025-06-21T11:47:10.589729947Z level=info msg="Executing migration" id="update login and email fields to lowercase" grafana | logger=migrator t=2025-06-21T11:47:10.590454284Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=723.467µs grafana | logger=migrator t=2025-06-21T11:47:10.593930147Z level=info msg="Executing migration" id="update login and email fields to lowercase2" grafana | logger=migrator t=2025-06-21T11:47:10.594533103Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=601.436µs grafana | logger=migrator t=2025-06-21T11:47:10.598826425Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2025-06-21T11:47:10.599656382Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=829.417µs grafana | logger=migrator t=2025-06-21T11:47:10.604080576Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2025-06-21T11:47:10.604864553Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=785.357µs grafana | logger=migrator t=2025-06-21T11:47:10.609841461Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator t=2025-06-21T11:47:10.611122703Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.280892ms grafana | logger=migrator t=2025-06-21T11:47:10.616148121Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2025-06-21T11:47:10.616973889Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=824.768µs grafana | logger=migrator t=2025-06-21T11:47:10.620352371Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2025-06-21T11:47:10.621284Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=929.189µs grafana | logger=migrator t=2025-06-21T11:47:10.625480521Z level=info msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2025-06-21T11:47:10.625523691Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=44.87µs grafana | logger=migrator t=2025-06-21T11:47:10.66911583Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2025-06-21T11:47:10.670187531Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.071221ms grafana | logger=migrator t=2025-06-21T11:47:10.673704624Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2025-06-21T11:47:10.674775424Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.07045ms grafana | logger=migrator t=2025-06-21T11:47:10.678210647Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2025-06-21T11:47:10.679278668Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.067671ms grafana | logger=migrator t=2025-06-21T11:47:10.683414937Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator 
t=2025-06-21T11:47:10.684124884Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=709.877µs grafana | logger=migrator t=2025-06-21T11:47:10.687313465Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-21T11:47:10.691185573Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.869937ms grafana | logger=migrator t=2025-06-21T11:47:10.695249032Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2025-06-21T11:47:10.696717916Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.468254ms grafana | logger=migrator t=2025-06-21T11:47:10.701654293Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2025-06-21T11:47:10.702459771Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=805.198µs grafana | logger=migrator t=2025-06-21T11:47:10.706192106Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2025-06-21T11:47:10.707279147Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.086411ms grafana | logger=migrator t=2025-06-21T11:47:10.712037373Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2025-06-21T11:47:10.713160663Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.1225ms grafana | logger=migrator t=2025-06-21T11:47:10.716555036Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2025-06-21T11:47:10.717259033Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=703.527µs grafana | logger=migrator t=2025-06-21T11:47:10.72218768Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2025-06-21T11:47:10.722591274Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=402.824µs grafana | logger=migrator t=2025-06-21T11:47:10.726053188Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2025-06-21T11:47:10.726907245Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=852.687µs grafana | logger=migrator t=2025-06-21T11:47:10.731676962Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" grafana | logger=migrator t=2025-06-21T11:47:10.732115626Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=440.334µs grafana | logger=migrator t=2025-06-21T11:47:10.736270036Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2025-06-21T11:47:10.737003972Z level=info msg="Migration successfully executed" id="create star table" duration=733.266µs grafana | logger=migrator t=2025-06-21T11:47:10.740698748Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2025-06-21T11:47:10.74192395Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.228332ms grafana | logger=migrator 
t=2025-06-21T11:47:10.745733567Z level=info msg="Executing migration" id="Add column dashboard_uid in star" grafana | logger=migrator t=2025-06-21T11:47:10.74818329Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=2.445003ms grafana | logger=migrator t=2025-06-21T11:47:10.751867525Z level=info msg="Executing migration" id="Add column org_id in star" grafana | logger=migrator t=2025-06-21T11:47:10.753328569Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.460554ms grafana | logger=migrator t=2025-06-21T11:47:10.757745462Z level=info msg="Executing migration" id="Add column updated in star" grafana | logger=migrator t=2025-06-21T11:47:10.759184726Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.438864ms grafana | logger=migrator t=2025-06-21T11:47:10.762643159Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" grafana | logger=migrator t=2025-06-21T11:47:10.763519097Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=876.058µs grafana | logger=migrator t=2025-06-21T11:47:10.767196963Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2025-06-21T11:47:10.768021011Z level=info msg="Migration successfully executed" id="create org table v1" duration=823.518µs grafana | logger=migrator t=2025-06-21T11:47:10.772422894Z level=info msg="Executing migration" id="create index UQE_org_name - v1" grafana | logger=migrator t=2025-06-21T11:47:10.773229771Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=806.578µs grafana | logger=migrator t=2025-06-21T11:47:10.77833756Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2025-06-21T11:47:10.779078218Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=740.298µs grafana | logger=migrator t=2025-06-21T11:47:10.782674672Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2025-06-21T11:47:10.783842833Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.167051ms grafana | logger=migrator t=2025-06-21T11:47:10.787581479Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2025-06-21T11:47:10.789022493Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.444474ms grafana | logger=migrator t=2025-06-21T11:47:10.793163833Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2025-06-21T11:47:10.79395504Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=790.657µs grafana | logger=migrator t=2025-06-21T11:47:10.798335662Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2025-06-21T11:47:10.798361212Z level=info msg="Migration successfully executed" id="Update org table charset" duration=26.52µs grafana | logger=migrator t=2025-06-21T11:47:10.801155129Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2025-06-21T11:47:10.801200839Z level=info msg="Migration successfully executed" id="Update org_user 
table charset" duration=46.88µs grafana | logger=migrator t=2025-06-21T11:47:10.806061917Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2025-06-21T11:47:10.80641133Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=348.663µs grafana | logger=migrator t=2025-06-21T11:47:10.810095715Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2025-06-21T11:47:10.810863252Z level=info msg="Migration successfully executed" id="create dashboard table" duration=767.177µs grafana | logger=migrator t=2025-06-21T11:47:10.815565268Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2025-06-21T11:47:10.816432536Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=866.958µs grafana | logger=migrator t=2025-06-21T11:47:10.820262513Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2025-06-21T11:47:10.821560415Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.297152ms grafana | logger=migrator t=2025-06-21T11:47:10.825452183Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator t=2025-06-21T11:47:10.826720656Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.266643ms grafana | logger=migrator t=2025-06-21T11:47:10.870647877Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2025-06-21T11:47:10.872050331Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.402944ms grafana | logger=migrator t=2025-06-21T11:47:10.87615449Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2025-06-21T11:47:10.877377152Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.219182ms grafana | logger=migrator t=2025-06-21T11:47:10.881205868Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2025-06-21T11:47:10.886241297Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.034769ms grafana | logger=migrator t=2025-06-21T11:47:10.891383627Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2025-06-21T11:47:10.892474478Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.090331ms grafana | logger=migrator t=2025-06-21T11:47:10.896629807Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2025-06-21T11:47:10.898071281Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.444854ms grafana | logger=migrator t=2025-06-21T11:47:10.902128879Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2025-06-21T11:47:10.903533313Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.403854ms grafana | logger=migrator t=2025-06-21T11:47:10.90836505Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 
grafana | logger=migrator t=2025-06-21T11:47:10.908848084Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=482.394µs grafana | logger=migrator t=2025-06-21T11:47:10.912080726Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2025-06-21T11:47:10.913028424Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=945.428µs grafana | logger=migrator t=2025-06-21T11:47:10.917125974Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2025-06-21T11:47:10.917152334Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=27.84µs grafana | logger=migrator t=2025-06-21T11:47:10.922012192Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2025-06-21T11:47:10.92396776Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.951808ms grafana | logger=migrator t=2025-06-21T11:47:10.927428033Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2025-06-21T11:47:10.929354152Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.925399ms grafana | logger=migrator t=2025-06-21T11:47:10.932725834Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2025-06-21T11:47:10.934597072Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.870538ms grafana | logger=migrator t=2025-06-21T11:47:10.938777343Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2025-06-21T11:47:10.939684361Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=906.848µs grafana | logger=migrator t=2025-06-21T11:47:10.943119684Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2025-06-21T11:47:10.946601777Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.524863ms grafana | logger=migrator t=2025-06-21T11:47:10.94996923Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2025-06-21T11:47:10.950785458Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=817.868µs grafana | logger=migrator t=2025-06-21T11:47:10.955050778Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2025-06-21T11:47:10.955784916Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=733.938µs grafana | logger=migrator t=2025-06-21T11:47:10.959282419Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2025-06-21T11:47:10.959314529Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=32.85µs grafana | logger=migrator t=2025-06-21T11:47:10.96453886Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2025-06-21T11:47:10.964607201Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=69.101µs grafana | logger=migrator t=2025-06-21T11:47:10.969536068Z 
level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2025-06-21T11:47:10.973465755Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.928377ms grafana | logger=migrator t=2025-06-21T11:47:10.977091361Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2025-06-21T11:47:10.979279022Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.186461ms grafana | logger=migrator t=2025-06-21T11:47:10.984765534Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2025-06-21T11:47:10.986990876Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.223521ms grafana | logger=migrator t=2025-06-21T11:47:10.990729892Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2025-06-21T11:47:10.992259626Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.529274ms grafana | logger=migrator t=2025-06-21T11:47:10.995144054Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2025-06-21T11:47:10.995372596Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=227.912µs grafana | logger=migrator t=2025-06-21T11:47:10.998183883Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2025-06-21T11:47:10.998899911Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=715.868µs grafana | logger=migrator t=2025-06-21T11:47:11.002132201Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2025-06-21T11:47:11.002967749Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=835.308µs grafana | logger=migrator t=2025-06-21T11:47:11.007790706Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2025-06-21T11:47:11.007846256Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=56.63µs grafana | logger=migrator t=2025-06-21T11:47:11.011362169Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2025-06-21T11:47:11.012271438Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=908.849µs grafana | logger=migrator t=2025-06-21T11:47:11.01562955Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2025-06-21T11:47:11.016437929Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=765.298µs grafana | logger=migrator t=2025-06-21T11:47:11.02076924Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-21T11:47:11.026135961Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.358811ms grafana | logger=migrator t=2025-06-21T11:47:11.059705312Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2025-06-21T11:47:11.061131616Z 
level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=1.425564ms grafana | logger=migrator t=2025-06-21T11:47:11.064653089Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2025-06-21T11:47:11.065967112Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.313683ms grafana | logger=migrator t=2025-06-21T11:47:11.070610247Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2025-06-21T11:47:11.071491715Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=881.158µs grafana | logger=migrator t=2025-06-21T11:47:11.074767157Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2025-06-21T11:47:11.07516651Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=391.943µs grafana | logger=migrator t=2025-06-21T11:47:11.0783331Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2025-06-21T11:47:11.078951677Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=617.957µs grafana | logger=migrator t=2025-06-21T11:47:11.082883555Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2025-06-21T11:47:11.085129276Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.220101ms grafana | logger=migrator t=2025-06-21T11:47:11.088336106Z level=info msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2025-06-21T11:47:11.089153584Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=817.228µs grafana | logger=migrator t=2025-06-21T11:47:11.092364684Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2025-06-21T11:47:11.092581856Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=216.492µs grafana | logger=migrator t=2025-06-21T11:47:11.096870208Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2025-06-21T11:47:11.097167371Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=296.653µs grafana | logger=migrator t=2025-06-21T11:47:11.100372191Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2025-06-21T11:47:11.10119578Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=823.319µs grafana | logger=migrator t=2025-06-21T11:47:11.104487171Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2025-06-21T11:47:11.106643271Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.15527ms grafana | logger=migrator t=2025-06-21T11:47:11.110019874Z level=info msg="Executing migration" id="Add deleted for dashboard" grafana | logger=migrator t=2025-06-21T11:47:11.112482998Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.456994ms grafana | logger=migrator t=2025-06-21T11:47:11.11693764Z level=info 
msg="Executing migration" id="Add index for deleted" grafana | logger=migrator t=2025-06-21T11:47:11.117732497Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=794.347µs grafana | logger=migrator t=2025-06-21T11:47:11.121507333Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag" grafana | logger=migrator t=2025-06-21T11:47:11.125038758Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=3.523365ms grafana | logger=migrator t=2025-06-21T11:47:11.1326744Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag" grafana | logger=migrator t=2025-06-21T11:47:11.135146794Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.473314ms grafana | logger=migrator t=2025-06-21T11:47:11.143941868Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag" grafana | logger=migrator t=2025-06-21T11:47:11.144494914Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=556.156µs grafana | logger=migrator t=2025-06-21T11:47:11.150996225Z level=info msg="Executing migration" id="Add apiVersion for dashboard" grafana | logger=migrator t=2025-06-21T11:47:11.152728922Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=1.735677ms grafana | logger=migrator t=2025-06-21T11:47:11.15773483Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" grafana | logger=migrator t=2025-06-21T11:47:11.158419796Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=682.246µs grafana | logger=migrator t=2025-06-21T11:47:11.163395604Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" grafana | logger=migrator t=2025-06-21T11:47:11.163845018Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=449.394µs grafana | logger=migrator t=2025-06-21T11:47:11.168446003Z level=info msg="Executing migration" id="create data_source table" grafana | logger=migrator t=2025-06-21T11:47:11.169319401Z level=info msg="Migration successfully executed" id="create data_source table" duration=872.998µs grafana | logger=migrator t=2025-06-21T11:47:11.173931885Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2025-06-21T11:47:11.175046875Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.1171ms grafana | logger=migrator t=2025-06-21T11:47:11.183284754Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2025-06-21T11:47:11.184192563Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=907.559µs grafana | logger=migrator t=2025-06-21T11:47:11.189253321Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2025-06-21T11:47:11.189932608Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=679.167µs grafana | logger=migrator t=2025-06-21T11:47:11.19338092Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2025-06-21T11:47:11.194071438Z 
level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=690.268µs grafana | logger=migrator t=2025-06-21T11:47:11.199702462Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2025-06-21T11:47:11.206449776Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.746764ms grafana | logger=migrator t=2025-06-21T11:47:11.209987519Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2025-06-21T11:47:11.210829988Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=842.699µs grafana | logger=migrator t=2025-06-21T11:47:11.217147108Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2025-06-21T11:47:11.218608942Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.463894ms grafana | logger=migrator t=2025-06-21T11:47:11.254954399Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2025-06-21T11:47:11.256332433Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.380304ms grafana | logger=migrator t=2025-06-21T11:47:11.260742955Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2025-06-21T11:47:11.261503952Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=762.287µs grafana | logger=migrator t=2025-06-21T11:47:11.265497901Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2025-06-21T11:47:11.268115246Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.616955ms grafana | logger=migrator t=2025-06-21T11:47:11.273579958Z level=info msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2025-06-21T11:47:11.276265444Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.691826ms grafana | logger=migrator t=2025-06-21T11:47:11.283570334Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator t=2025-06-21T11:47:11.283666775Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=99.321µs grafana | logger=migrator t=2025-06-21T11:47:11.288604101Z level=info msg="Executing migration" id="Update initial version to 1" grafana | logger=migrator t=2025-06-21T11:47:11.28949741Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=892.499µs grafana | logger=migrator t=2025-06-21T11:47:11.296956741Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2025-06-21T11:47:11.299436815Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.479814ms grafana | logger=migrator t=2025-06-21T11:47:11.3051695Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator t=2025-06-21T11:47:11.305363321Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=194.011µs grafana | logger=migrator t=2025-06-21T11:47:11.309551012Z level=info msg="Executing migration" id="Update json_data with nulls" 
grafana | logger=migrator t=2025-06-21T11:47:11.309685213Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=134.321µs grafana | logger=migrator t=2025-06-21T11:47:11.313609591Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2025-06-21T11:47:11.31554188Z level=info msg="Migration successfully executed" id="Add uid column" duration=1.932029ms grafana | logger=migrator t=2025-06-21T11:47:11.320643538Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2025-06-21T11:47:11.320784399Z level=info msg="Migration successfully executed" id="Update uid value" duration=141.121µs grafana | logger=migrator t=2025-06-21T11:47:11.324602876Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2025-06-21T11:47:11.325510855Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=904.599µs grafana | logger=migrator t=2025-06-21T11:47:11.332516332Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" grafana | logger=migrator t=2025-06-21T11:47:11.333315539Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=798.987µs grafana | logger=migrator t=2025-06-21T11:47:11.338738951Z level=info msg="Executing migration" id="Add is_prunable column" grafana | logger=migrator t=2025-06-21T11:47:11.341469337Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=2.720706ms grafana | logger=migrator t=2025-06-21T11:47:11.347848019Z level=info msg="Executing migration" id="Add api_version column" grafana | logger=migrator t=2025-06-21T11:47:11.350331963Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.483604ms grafana | logger=migrator t=2025-06-21T11:47:11.355260909Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" grafana | logger=migrator t=2025-06-21T11:47:11.355279389Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=19.42µs grafana | logger=migrator t=2025-06-21T11:47:11.359790952Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2025-06-21T11:47:11.360847852Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.05719ms grafana | logger=migrator t=2025-06-21T11:47:11.366644668Z level=info msg="Executing migration" id="add index api_key.account_id" grafana | logger=migrator t=2025-06-21T11:47:11.36793848Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.293352ms grafana | logger=migrator t=2025-06-21T11:47:11.373457063Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2025-06-21T11:47:11.374198121Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=741.048µs grafana | logger=migrator t=2025-06-21T11:47:11.378540522Z level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2025-06-21T11:47:11.379298729Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=757.677µs grafana | logger=migrator t=2025-06-21T11:47:11.384862172Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" grafana | logger=migrator t=2025-06-21T11:47:11.385576379Z 
level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=714.477µs grafana | logger=migrator t=2025-06-21T11:47:11.388747859Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" grafana | logger=migrator t=2025-06-21T11:47:11.389486416Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=738.827µs grafana | logger=migrator t=2025-06-21T11:47:11.395135291Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2025-06-21T11:47:11.397061169Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.919898ms grafana | logger=migrator t=2025-06-21T11:47:11.400171779Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2025-06-21T11:47:11.40557513Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=5.402311ms grafana | logger=migrator t=2025-06-21T11:47:11.409141855Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator t=2025-06-21T11:47:11.409874191Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=732.226µs grafana | logger=migrator t=2025-06-21T11:47:11.441206751Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2025-06-21T11:47:11.442032049Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=827.448µs grafana | logger=migrator t=2025-06-21T11:47:11.447948596Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2025-06-21T11:47:11.448504611Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=556.005µs grafana | logger=migrator t=2025-06-21T11:47:11.452993424Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2025-06-21T11:47:11.453542449Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=549.015µs grafana | logger=migrator t=2025-06-21T11:47:11.458102482Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2025-06-21T11:47:11.458332325Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=229.613µs grafana | logger=migrator t=2025-06-21T11:47:11.46405807Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2025-06-21T11:47:11.464459274Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=398.354µs grafana | logger=migrator t=2025-06-21T11:47:11.469267929Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2025-06-21T11:47:11.469289089Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=24.61µs grafana | logger=migrator t=2025-06-21T11:47:11.473003945Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2025-06-21T11:47:11.474836912Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=1.831337ms grafana | logger=migrator t=2025-06-21T11:47:11.479986472Z level=info msg="Executing migration" id="Add service account foreign key" grafana | logger=migrator 
t=2025-06-21T11:47:11.481797489Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.809037ms grafana | logger=migrator t=2025-06-21T11:47:11.485623526Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2025-06-21T11:47:11.485737107Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=113.671µs grafana | logger=migrator t=2025-06-21T11:47:11.488969308Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2025-06-21T11:47:11.490782745Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=1.813157ms grafana | logger=migrator t=2025-06-21T11:47:11.495809753Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2025-06-21T11:47:11.498347478Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.537495ms grafana | logger=migrator t=2025-06-21T11:47:11.50281061Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2025-06-21T11:47:11.503602208Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=793.688µs grafana | logger=migrator t=2025-06-21T11:47:11.50804753Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2025-06-21T11:47:11.508512044Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=464.334µs grafana | logger=migrator t=2025-06-21T11:47:11.512455293Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2025-06-21T11:47:11.513095089Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=640.376µs grafana | logger=migrator t=2025-06-21T11:47:11.520941843Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2025-06-21T11:47:11.52155745Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=615.357µs grafana | logger=migrator t=2025-06-21T11:47:11.525127873Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2025-06-21T11:47:11.527185814Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=2.056961ms grafana | logger=migrator t=2025-06-21T11:47:11.532509405Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2025-06-21T11:47:11.533422453Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=913.448µs grafana | logger=migrator t=2025-06-21T11:47:11.538928396Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2025-06-21T11:47:11.538946256Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=18.86µs grafana | logger=migrator t=2025-06-21T11:47:11.543265937Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2025-06-21T11:47:11.543291117Z level=info msg="Migration successfully executed" id="Update 
dashboard_snapshot table charset" duration=65.95µs grafana | logger=migrator t=2025-06-21T11:47:11.547838182Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2025-06-21T11:47:11.552877079Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=5.036717ms grafana | logger=migrator t=2025-06-21T11:47:11.557992228Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2025-06-21T11:47:11.560783395Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.790537ms grafana | logger=migrator t=2025-06-21T11:47:11.566426689Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2025-06-21T11:47:11.566445779Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=20.02µs grafana | logger=migrator t=2025-06-21T11:47:11.57070836Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator t=2025-06-21T11:47:11.571508687Z level=info msg="Migration successfully executed" id="create quota table v1" duration=799.537µs grafana | logger=migrator t=2025-06-21T11:47:11.576078722Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2025-06-21T11:47:11.577461255Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.381953ms grafana | logger=migrator t=2025-06-21T11:47:11.583041538Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2025-06-21T11:47:11.583069268Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=26.92µs grafana | logger=migrator t=2025-06-21T11:47:11.586085177Z level=info msg="Executing migration" id="create plugin_setting table" grafana | logger=migrator t=2025-06-21T11:47:11.586953396Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=867.789µs grafana | logger=migrator t=2025-06-21T11:47:11.591432938Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator t=2025-06-21T11:47:11.592298046Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=864.668µs grafana | logger=migrator t=2025-06-21T11:47:11.597749108Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" grafana | logger=migrator t=2025-06-21T11:47:11.601175872Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.426024ms grafana | logger=migrator t=2025-06-21T11:47:11.642596857Z level=info msg="Executing migration" id="Update plugin_setting table charset" grafana | logger=migrator t=2025-06-21T11:47:11.642634327Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=38.72µs grafana | logger=migrator t=2025-06-21T11:47:11.653098068Z level=info msg="Executing migration" id="update NULL org_id to 1" grafana | logger=migrator t=2025-06-21T11:47:11.653564152Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=467.944µs grafana | logger=migrator t=2025-06-21T11:47:11.65651116Z level=info msg="Executing 
migration" id="make org_id NOT NULL and DEFAULT VALUE 1" grafana | logger=migrator t=2025-06-21T11:47:11.663916171Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=7.404491ms grafana | logger=migrator t=2025-06-21T11:47:11.667692817Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2025-06-21T11:47:11.668546956Z level=info msg="Migration successfully executed" id="create session table" duration=853.689µs grafana | logger=migrator t=2025-06-21T11:47:11.672788246Z level=info msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2025-06-21T11:47:11.673096749Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=308.063µs grafana | logger=migrator t=2025-06-21T11:47:11.677148618Z level=info msg="Executing migration" id="Drop old table playlist_item table" grafana | logger=migrator t=2025-06-21T11:47:11.677229999Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=81.471µs grafana | logger=migrator t=2025-06-21T11:47:11.681688861Z level=info msg="Executing migration" id="create playlist table v2" grafana | logger=migrator t=2025-06-21T11:47:11.682418819Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=729.228µs grafana | logger=migrator t=2025-06-21T11:47:11.685413417Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2025-06-21T11:47:11.686217564Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=803.147µs grafana | logger=migrator t=2025-06-21T11:47:11.690562517Z level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2025-06-21T11:47:11.690586807Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=24.7µs grafana | logger=migrator t=2025-06-21T11:47:11.695989748Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2025-06-21T11:47:11.696025938Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=36.07µs grafana | logger=migrator t=2025-06-21T11:47:11.701206808Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2025-06-21T11:47:11.705441408Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.23492ms grafana | logger=migrator t=2025-06-21T11:47:11.710211564Z level=info msg="Executing migration" id="Add playlist column updated_at" grafana | logger=migrator t=2025-06-21T11:47:11.713421075Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.208821ms grafana | logger=migrator t=2025-06-21T11:47:11.718786366Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2025-06-21T11:47:11.718986998Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=199.912µs grafana | logger=migrator t=2025-06-21T11:47:11.725354378Z level=info msg="Executing migration" id="drop preferences table v3" grafana | logger=migrator t=2025-06-21T11:47:11.725604692Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=253.354µs grafana | logger=migrator t=2025-06-21T11:47:11.730871691Z level=info msg="Executing migration" id="create preferences table 
v3" grafana | logger=migrator t=2025-06-21T11:47:11.731811921Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=938.98µs grafana | logger=migrator t=2025-06-21T11:47:11.736748818Z level=info msg="Executing migration" id="Update preferences table charset" grafana | logger=migrator t=2025-06-21T11:47:11.736777259Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=26.921µs grafana | logger=migrator t=2025-06-21T11:47:11.74226137Z level=info msg="Executing migration" id="Add column team_id in preferences" grafana | logger=migrator t=2025-06-21T11:47:11.745580753Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.318593ms grafana | logger=migrator t=2025-06-21T11:47:11.908795963Z level=info msg="Executing migration" id="Update team_id column values in preferences" grafana | logger=migrator t=2025-06-21T11:47:11.909297319Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=501.696µs grafana | logger=migrator t=2025-06-21T11:47:12.040909509Z level=info msg="Executing migration" id="Add column week_start in preferences" grafana | logger=migrator t=2025-06-21T11:47:12.045518404Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=4.609765ms grafana | logger=migrator t=2025-06-21T11:47:12.180236301Z level=info msg="Executing migration" id="Add column preferences.json_data" grafana | logger=migrator t=2025-06-21T11:47:12.182930337Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.696846ms grafana | logger=migrator t=2025-06-21T11:47:12.19352851Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" grafana | logger=migrator t=2025-06-21T11:47:12.19356025Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=34.17µs grafana | logger=migrator t=2025-06-21T11:47:12.197452458Z level=info msg="Executing migration" id="Add preferences index org_id" grafana | logger=migrator t=2025-06-21T11:47:12.198519967Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.067079ms grafana | logger=migrator t=2025-06-21T11:47:12.204751228Z level=info msg="Executing migration" id="Add preferences index user_id" grafana | logger=migrator t=2025-06-21T11:47:12.206509375Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.759697ms grafana | logger=migrator t=2025-06-21T11:47:12.210183421Z level=info msg="Executing migration" id="create alert table v1" grafana | logger=migrator t=2025-06-21T11:47:12.212073578Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.889837ms grafana | logger=migrator t=2025-06-21T11:47:12.215681864Z level=info msg="Executing migration" id="add index alert org_id & id " grafana | logger=migrator t=2025-06-21T11:47:12.216572482Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=889.958µs grafana | logger=migrator t=2025-06-21T11:47:12.220830443Z level=info msg="Executing migration" id="add index alert state" grafana | logger=migrator t=2025-06-21T11:47:12.222161035Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.330552ms grafana | logger=migrator t=2025-06-21T11:47:12.226988092Z level=info msg="Executing migration" id="add index 
alert dashboard_id" grafana | logger=migrator t=2025-06-21T11:47:12.228557217Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.570475ms grafana | logger=migrator t=2025-06-21T11:47:12.233493104Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" grafana | logger=migrator t=2025-06-21T11:47:12.235578985Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=2.084651ms grafana | logger=migrator t=2025-06-21T11:47:12.239794336Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" grafana | logger=migrator t=2025-06-21T11:47:12.240863866Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.0715ms grafana | logger=migrator t=2025-06-21T11:47:12.243962176Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" grafana | logger=migrator t=2025-06-21T11:47:12.244902364Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=939.498µs grafana | logger=migrator t=2025-06-21T11:47:12.250298276Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" grafana | logger=migrator t=2025-06-21T11:47:12.263734306Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=13.4395ms grafana | logger=migrator t=2025-06-21T11:47:12.267617994Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" grafana | logger=migrator t=2025-06-21T11:47:12.268208239Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=590.485µs grafana | logger=migrator t=2025-06-21T11:47:12.272991516Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" grafana | logger=migrator t=2025-06-21T11:47:12.274058955Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.068529ms grafana | logger=migrator t=2025-06-21T11:47:12.27861552Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" grafana | logger=migrator t=2025-06-21T11:47:12.278910143Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=294.573µs grafana | logger=migrator t=2025-06-21T11:47:12.282103633Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" grafana | logger=migrator t=2025-06-21T11:47:12.282759509Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=655.506µs grafana | logger=migrator t=2025-06-21T11:47:12.28585499Z level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2025-06-21T11:47:12.286628647Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=773.667µs grafana | logger=migrator t=2025-06-21T11:47:12.290722176Z level=info msg="Executing migration" id="Add column is_default" grafana | logger=migrator t=2025-06-21T11:47:12.295714025Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.992409ms grafana | logger=migrator t=2025-06-21T11:47:12.299543851Z level=info msg="Executing migration" id="Add column frequency" grafana | logger=migrator t=2025-06-21T11:47:12.304442158Z 
level=info msg="Migration successfully executed" id="Add column frequency" duration=4.897027ms grafana | logger=migrator t=2025-06-21T11:47:12.308514078Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2025-06-21T11:47:12.311952981Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.438452ms grafana | logger=migrator t=2025-06-21T11:47:12.317272992Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2025-06-21T11:47:12.320738816Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.465364ms grafana | logger=migrator t=2025-06-21T11:47:12.32436037Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2025-06-21T11:47:12.325487581Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.126511ms grafana | logger=migrator t=2025-06-21T11:47:12.332797121Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2025-06-21T11:47:12.332833392Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=37.531µs grafana | logger=migrator t=2025-06-21T11:47:12.369661048Z level=info msg="Executing migration" id="Update alert_notification table charset" grafana | logger=migrator t=2025-06-21T11:47:12.369711768Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=53.591µs grafana | logger=migrator t=2025-06-21T11:47:12.374733416Z level=info msg="Executing migration" id="create notification_journal table v1" grafana | logger=migrator t=2025-06-21T11:47:12.375664715Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=931.339µs grafana | logger=migrator t=2025-06-21T11:47:12.380844315Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-21T11:47:12.381632352Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=788.307µs grafana | logger=migrator t=2025-06-21T11:47:12.388295107Z level=info msg="Executing migration" id="drop alert_notification_journal" grafana | logger=migrator t=2025-06-21T11:47:12.389200506Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=908.469µs grafana | logger=migrator t=2025-06-21T11:47:12.393268284Z level=info msg="Executing migration" id="create alert_notification_state table v1" grafana | logger=migrator t=2025-06-21T11:47:12.394170013Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=900.439µs grafana | logger=migrator t=2025-06-21T11:47:12.400317632Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-21T11:47:12.401975689Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.657307ms grafana | logger=migrator t=2025-06-21T11:47:12.40523786Z level=info msg="Executing migration" id="Add for to alert table" grafana | logger=migrator t=2025-06-21T11:47:12.409081157Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.842617ms grafana | logger=migrator 
t=2025-06-21T11:47:12.412070336Z level=info msg="Executing migration" id="Add column uid in alert_notification" grafana | logger=migrator t=2025-06-21T11:47:12.415849153Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.779086ms grafana | logger=migrator t=2025-06-21T11:47:12.421404936Z level=info msg="Executing migration" id="Update uid column values in alert_notification" grafana | logger=migrator t=2025-06-21T11:47:12.421586788Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=182.702µs grafana | logger=migrator t=2025-06-21T11:47:12.426787017Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" grafana | logger=migrator t=2025-06-21T11:47:12.427693336Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=903.859µs grafana | logger=migrator t=2025-06-21T11:47:12.430804016Z level=info msg="Executing migration" id="Remove unique index org_id_name" grafana | logger=migrator t=2025-06-21T11:47:12.431607564Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=803.318µs grafana | logger=migrator t=2025-06-21T11:47:12.434568942Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" grafana | logger=migrator t=2025-06-21T11:47:12.438293009Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.724817ms grafana | logger=migrator t=2025-06-21T11:47:12.443829752Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" grafana | logger=migrator t=2025-06-21T11:47:12.443847682Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=18.89µs grafana | logger=migrator t=2025-06-21T11:47:12.447112584Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" grafana | logger=migrator t=2025-06-21T11:47:12.447948871Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=836.117µs grafana | logger=migrator t=2025-06-21T11:47:12.450955471Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" grafana | logger=migrator t=2025-06-21T11:47:12.452450385Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.494204ms grafana | logger=migrator t=2025-06-21T11:47:12.460759605Z level=info msg="Executing migration" id="Drop old annotation table v4" grafana | logger=migrator t=2025-06-21T11:47:12.460938577Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=181.462µs grafana | logger=migrator t=2025-06-21T11:47:12.466259118Z level=info msg="Executing migration" id="create annotation table v5" grafana | logger=migrator t=2025-06-21T11:47:12.467305848Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.04706ms grafana | logger=migrator t=2025-06-21T11:47:12.472789891Z level=info msg="Executing migration" id="add index annotation 0 v3" grafana | logger=migrator t=2025-06-21T11:47:12.47368151Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=891.079µs grafana | logger=migrator t=2025-06-21T11:47:12.476995401Z level=info msg="Executing migration" id="add index annotation 1 v3" grafana | logger=migrator 
t=2025-06-21T11:47:12.47787226Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=876.489µs grafana | logger=migrator t=2025-06-21T11:47:12.48101592Z level=info msg="Executing migration" id="add index annotation 2 v3" grafana | logger=migrator t=2025-06-21T11:47:12.481860829Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=844.579µs grafana | logger=migrator t=2025-06-21T11:47:12.486086419Z level=info msg="Executing migration" id="add index annotation 3 v3" grafana | logger=migrator t=2025-06-21T11:47:12.487015317Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=928.218µs grafana | logger=migrator t=2025-06-21T11:47:12.490996466Z level=info msg="Executing migration" id="add index annotation 4 v3" grafana | logger=migrator t=2025-06-21T11:47:12.491881385Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=884.359µs grafana | logger=migrator t=2025-06-21T11:47:12.495112136Z level=info msg="Executing migration" id="Update annotation table charset" grafana | logger=migrator t=2025-06-21T11:47:12.495136916Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=25.28µs grafana | logger=migrator t=2025-06-21T11:47:12.499193005Z level=info msg="Executing migration" id="Add column region_id to annotation table" grafana | logger=migrator t=2025-06-21T11:47:12.503488247Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.294802ms grafana | logger=migrator t=2025-06-21T11:47:12.508903939Z level=info msg="Executing migration" id="Drop category_id index" grafana | logger=migrator t=2025-06-21T11:47:12.509644306Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=739.337µs grafana | logger=migrator t=2025-06-21T11:47:12.512767196Z level=info msg="Executing migration" id="Add column tags to annotation table" grafana | logger=migrator t=2025-06-21T11:47:12.51630477Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.537234ms grafana | logger=migrator t=2025-06-21T11:47:12.520560411Z level=info msg="Executing migration" id="Create annotation_tag table v2" grafana | logger=migrator t=2025-06-21T11:47:12.521163797Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=603.046µs grafana | logger=migrator t=2025-06-21T11:47:12.525021585Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" grafana | logger=migrator t=2025-06-21T11:47:12.526038174Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.016069ms grafana | logger=migrator t=2025-06-21T11:47:12.530587437Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" grafana | logger=migrator t=2025-06-21T11:47:12.532522476Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.937379ms grafana | logger=migrator t=2025-06-21T11:47:12.566630255Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" grafana | logger=migrator t=2025-06-21T11:47:12.580338377Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=13.709152ms grafana | logger=migrator 
t=2025-06-21T11:47:12.58481387Z level=info msg="Executing migration" id="Create annotation_tag table v3" grafana | logger=migrator t=2025-06-21T11:47:12.585477527Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=663.777µs grafana | logger=migrator t=2025-06-21T11:47:12.589661227Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" grafana | logger=migrator t=2025-06-21T11:47:12.590526915Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=862.598µs grafana | logger=migrator t=2025-06-21T11:47:12.637089194Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" grafana | logger=migrator t=2025-06-21T11:47:12.637521528Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=432.894µs grafana | logger=migrator t=2025-06-21T11:47:12.644877539Z level=info msg="Executing migration" id="drop table annotation_tag_v2" grafana | logger=migrator t=2025-06-21T11:47:12.645489255Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=611.076µs grafana | logger=migrator t=2025-06-21T11:47:12.651366062Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" grafana | logger=migrator t=2025-06-21T11:47:12.651788536Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=421.664µs grafana | logger=migrator t=2025-06-21T11:47:12.655438781Z level=info msg="Executing migration" id="Add created time to annotation table" grafana | logger=migrator t=2025-06-21T11:47:12.661405159Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=5.966197ms grafana | logger=migrator t=2025-06-21T11:47:12.666221134Z level=info msg="Executing migration" id="Add updated time to annotation table" grafana | logger=migrator t=2025-06-21T11:47:12.670363884Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.14216ms grafana | logger=migrator t=2025-06-21T11:47:12.673650547Z level=info msg="Executing migration" id="Add index for created in annotation table" grafana | logger=migrator t=2025-06-21T11:47:12.674574185Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=923.288µs grafana | logger=migrator t=2025-06-21T11:47:12.677905208Z level=info msg="Executing migration" id="Add index for updated in annotation table" grafana | logger=migrator t=2025-06-21T11:47:12.678846826Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=941.208µs grafana | logger=migrator t=2025-06-21T11:47:12.68332465Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" grafana | logger=migrator t=2025-06-21T11:47:12.683631403Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=305.983µs grafana | logger=migrator t=2025-06-21T11:47:12.687149476Z level=info msg="Executing migration" id="Add epoch_end column" grafana | logger=migrator t=2025-06-21T11:47:12.693453837Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=6.302061ms grafana | logger=migrator 
t=2025-06-21T11:47:12.697308935Z level=info msg="Executing migration" id="Add index for epoch_end" grafana | logger=migrator t=2025-06-21T11:47:12.697990311Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=681.176µs grafana | logger=migrator t=2025-06-21T11:47:12.701654737Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" grafana | logger=migrator t=2025-06-21T11:47:12.701844878Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=189.691µs grafana | logger=migrator t=2025-06-21T11:47:12.704075279Z level=info msg="Executing migration" id="Move region to single row" grafana | logger=migrator t=2025-06-21T11:47:12.704541134Z level=info msg="Migration successfully executed" id="Move region to single row" duration=465.355µs grafana | logger=migrator t=2025-06-21T11:47:12.707670374Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" grafana | logger=migrator t=2025-06-21T11:47:12.708543853Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=873.109µs grafana | logger=migrator t=2025-06-21T11:47:12.712559741Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" grafana | logger=migrator t=2025-06-21T11:47:12.71343694Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=876.869µs grafana | logger=migrator t=2025-06-21T11:47:12.716560379Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-21T11:47:12.717508569Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=947.91µs grafana | logger=migrator t=2025-06-21T11:47:12.751427666Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-21T11:47:12.75294518Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.517374ms grafana | logger=migrator t=2025-06-21T11:47:12.759982518Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" grafana | logger=migrator t=2025-06-21T11:47:12.761274271Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.291383ms grafana | logger=migrator t=2025-06-21T11:47:12.766201188Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" grafana | logger=migrator t=2025-06-21T11:47:12.767530111Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.330303ms grafana | logger=migrator t=2025-06-21T11:47:12.77269058Z level=info msg="Executing migration" id="Increase tags column to length 4096" grafana | logger=migrator t=2025-06-21T11:47:12.77271651Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=27.55µs grafana | logger=migrator t=2025-06-21T11:47:12.777995572Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" grafana | logger=migrator t=2025-06-21T11:47:12.778012572Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" 
duration=18.09µs grafana | logger=migrator t=2025-06-21T11:47:12.782823119Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" grafana | logger=migrator t=2025-06-21T11:47:12.782847879Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=25.54µs grafana | logger=migrator t=2025-06-21T11:47:12.788498094Z level=info msg="Executing migration" id="create test_data table" grafana | logger=migrator t=2025-06-21T11:47:12.789349891Z level=info msg="Migration successfully executed" id="create test_data table" duration=851.547µs grafana | logger=migrator t=2025-06-21T11:47:12.842684225Z level=info msg="Executing migration" id="create dashboard_version table v1" grafana | logger=migrator t=2025-06-21T11:47:12.843585174Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=902.399µs grafana | logger=migrator t=2025-06-21T11:47:12.848752974Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" grafana | logger=migrator t=2025-06-21T11:47:12.850528391Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.775297ms grafana | logger=migrator t=2025-06-21T11:47:12.855003264Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" grafana | logger=migrator t=2025-06-21T11:47:12.855947833Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=944.309µs grafana | logger=migrator t=2025-06-21T11:47:12.859363496Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" grafana | logger=migrator t=2025-06-21T11:47:12.859569748Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=206.502µs grafana | logger=migrator t=2025-06-21T11:47:12.862587848Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" grafana | logger=migrator t=2025-06-21T11:47:12.862961651Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=373.943µs grafana | logger=migrator t=2025-06-21T11:47:12.866246682Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" grafana | logger=migrator t=2025-06-21T11:47:12.866262782Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=14.43µs grafana | logger=migrator t=2025-06-21T11:47:12.870386462Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" grafana | logger=migrator t=2025-06-21T11:47:12.877752373Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=7.361291ms grafana | logger=migrator t=2025-06-21T11:47:12.881793952Z level=info msg="Executing migration" id="create team table" grafana | logger=migrator t=2025-06-21T11:47:12.883229626Z level=info msg="Migration successfully executed" id="create team table" duration=1.442684ms grafana | logger=migrator t=2025-06-21T11:47:12.888941781Z level=info msg="Executing migration" id="add index team.org_id" grafana | logger=migrator t=2025-06-21T11:47:12.88991843Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=976.269µs grafana | logger=migrator t=2025-06-21T11:47:12.892892679Z level=info msg="Executing 
migration" id="add unique index team_org_id_name" grafana | logger=migrator t=2025-06-21T11:47:12.894292672Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.398513ms grafana | logger=migrator t=2025-06-21T11:47:12.899771045Z level=info msg="Executing migration" id="Add column uid in team" grafana | logger=migrator t=2025-06-21T11:47:12.906853193Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=7.082798ms grafana | logger=migrator t=2025-06-21T11:47:12.91062631Z level=info msg="Executing migration" id="Update uid column values in team" grafana | logger=migrator t=2025-06-21T11:47:12.910829202Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=202.892µs grafana | logger=migrator t=2025-06-21T11:47:12.945509066Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" grafana | logger=migrator t=2025-06-21T11:47:12.947461535Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.952219ms grafana | logger=migrator t=2025-06-21T11:47:12.950906818Z level=info msg="Executing migration" id="Add column external_uid in team" grafana | logger=migrator t=2025-06-21T11:47:12.957460891Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=6.554583ms grafana | logger=migrator t=2025-06-21T11:47:12.963204356Z level=info msg="Executing migration" id="Add column is_provisioned in team" grafana | logger=migrator t=2025-06-21T11:47:12.967775541Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=4.570275ms grafana | logger=migrator t=2025-06-21T11:47:12.971938751Z level=info msg="Executing migration" id="create team member table" grafana | logger=migrator t=2025-06-21T11:47:12.972885129Z level=info msg="Migration successfully executed" id="create team member table" duration=946.228µs grafana | logger=migrator t=2025-06-21T11:47:12.976799837Z level=info msg="Executing migration" id="add index team_member.org_id" grafana | logger=migrator t=2025-06-21T11:47:12.97810023Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.300003ms grafana | logger=migrator t=2025-06-21T11:47:12.984341461Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" grafana | logger=migrator t=2025-06-21T11:47:12.985288619Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=947.288µs grafana | logger=migrator t=2025-06-21T11:47:12.987990135Z level=info msg="Executing migration" id="add index team_member.team_id" grafana | logger=migrator t=2025-06-21T11:47:12.988918895Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=928.53µs grafana | logger=migrator t=2025-06-21T11:47:12.992758972Z level=info msg="Executing migration" id="Add column email to team table" grafana | logger=migrator t=2025-06-21T11:47:12.997656909Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.897347ms grafana | logger=migrator t=2025-06-21T11:47:13.004390673Z level=info msg="Executing migration" id="Add column external to team_member table" grafana | logger=migrator t=2025-06-21T11:47:13.011000237Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=6.609354ms grafana | logger=migrator 
t=2025-06-21T11:47:13.082657274Z level=info msg="Executing migration" id="Add column permission to team_member table" grafana | logger=migrator t=2025-06-21T11:47:13.088266657Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.609253ms grafana | logger=migrator t=2025-06-21T11:47:13.129086918Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" grafana | logger=migrator t=2025-06-21T11:47:13.138031353Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=8.946265ms grafana | logger=migrator t=2025-06-21T11:47:13.143441745Z level=info msg="Executing migration" id="create dashboard acl table" grafana | logger=migrator t=2025-06-21T11:47:13.152729955Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=9.29205ms grafana | logger=migrator t=2025-06-21T11:47:13.158435349Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" grafana | logger=migrator t=2025-06-21T11:47:13.160693671Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=2.257732ms grafana | logger=migrator t=2025-06-21T11:47:13.166618308Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" grafana | logger=migrator t=2025-06-21T11:47:13.167272503Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=654.245µs grafana | logger=migrator t=2025-06-21T11:47:13.170343153Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" grafana | logger=migrator t=2025-06-21T11:47:13.170978399Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=634.786µs grafana | logger=migrator t=2025-06-21T11:47:13.173793366Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" grafana | logger=migrator t=2025-06-21T11:47:13.174655714Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=862.648µs grafana | logger=migrator t=2025-06-21T11:47:13.179790143Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" grafana | logger=migrator t=2025-06-21T11:47:13.180670592Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=880.189µs grafana | logger=migrator t=2025-06-21T11:47:13.186319886Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" grafana | logger=migrator t=2025-06-21T11:47:13.188240634Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.915898ms grafana | logger=migrator t=2025-06-21T11:47:13.191810649Z level=info msg="Executing migration" id="add index dashboard_permission" grafana | logger=migrator t=2025-06-21T11:47:13.192736877Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=926.308µs grafana | logger=migrator t=2025-06-21T11:47:13.198096379Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" grafana | logger=migrator t=2025-06-21T11:47:13.198566634Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=470.195µs grafana | logger=migrator t=2025-06-21T11:47:13.202970536Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and 
folders" grafana | logger=migrator t=2025-06-21T11:47:13.203228078Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=257.352µs grafana | logger=migrator t=2025-06-21T11:47:13.208881692Z level=info msg="Executing migration" id="create tag table" grafana | logger=migrator t=2025-06-21T11:47:13.209607949Z level=info msg="Migration successfully executed" id="create tag table" duration=728.867µs grafana | logger=migrator t=2025-06-21T11:47:13.213752208Z level=info msg="Executing migration" id="add index tag.key_value" grafana | logger=migrator t=2025-06-21T11:47:13.214421045Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=668.857µs grafana | logger=migrator t=2025-06-21T11:47:13.21914058Z level=info msg="Executing migration" id="create login attempt table" grafana | logger=migrator t=2025-06-21T11:47:13.219947908Z level=info msg="Migration successfully executed" id="create login attempt table" duration=809.558µs grafana | logger=migrator t=2025-06-21T11:47:13.237755489Z level=info msg="Executing migration" id="add index login_attempt.username" grafana | logger=migrator t=2025-06-21T11:47:13.238633377Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=880.118µs grafana | logger=migrator t=2025-06-21T11:47:13.24831741Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" grafana | logger=migrator t=2025-06-21T11:47:13.249441631Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.125971ms grafana | logger=migrator t=2025-06-21T11:47:13.255197046Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-21T11:47:13.266265152Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=11.070486ms grafana | logger=migrator t=2025-06-21T11:47:13.271719224Z level=info msg="Executing migration" id="create login_attempt v2" grafana | logger=migrator t=2025-06-21T11:47:13.272249679Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=530.375µs grafana | logger=migrator t=2025-06-21T11:47:13.276285417Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" grafana | logger=migrator t=2025-06-21T11:47:13.277685351Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.402054ms grafana | logger=migrator t=2025-06-21T11:47:13.283335355Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" grafana | logger=migrator t=2025-06-21T11:47:13.283758629Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=424.224µs grafana | logger=migrator t=2025-06-21T11:47:13.289485984Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" grafana | logger=migrator t=2025-06-21T11:47:13.290479923Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=995.549µs grafana | logger=migrator t=2025-06-21T11:47:13.299725752Z level=info msg="Executing migration" id="create user auth table" grafana | logger=migrator t=2025-06-21T11:47:13.300807433Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.078621ms grafana | logger=migrator t=2025-06-21T11:47:13.337473284Z level=info msg="Executing 
migration" id="create index IDX_user_auth_auth_module_auth_id - v1" grafana | logger=migrator t=2025-06-21T11:47:13.338748656Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.277092ms grafana | logger=migrator t=2025-06-21T11:47:13.342677624Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" grafana | logger=migrator t=2025-06-21T11:47:13.342702204Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=27.101µs grafana | logger=migrator t=2025-06-21T11:47:13.348306428Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" grafana | logger=migrator t=2025-06-21T11:47:13.35277384Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=4.466672ms grafana | logger=migrator t=2025-06-21T11:47:13.357552606Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" grafana | logger=migrator t=2025-06-21T11:47:13.363771086Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=6.21771ms grafana | logger=migrator t=2025-06-21T11:47:13.369125736Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" grafana | logger=migrator t=2025-06-21T11:47:13.375585859Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=6.458623ms grafana | logger=migrator t=2025-06-21T11:47:13.381056911Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" grafana | logger=migrator t=2025-06-21T11:47:13.388140109Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=7.082318ms grafana | logger=migrator t=2025-06-21T11:47:13.392375689Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" grafana | logger=migrator t=2025-06-21T11:47:13.393058655Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=682.896µs grafana | logger=migrator t=2025-06-21T11:47:13.401340985Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" grafana | logger=migrator t=2025-06-21T11:47:13.408481343Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=7.025127ms grafana | logger=migrator t=2025-06-21T11:47:13.411998597Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" grafana | logger=migrator t=2025-06-21T11:47:13.416874663Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=4.875516ms grafana | logger=migrator t=2025-06-21T11:47:13.421897062Z level=info msg="Executing migration" id="create server_lock table" grafana | logger=migrator t=2025-06-21T11:47:13.422932312Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.03837ms grafana | logger=migrator t=2025-06-21T11:47:13.429528726Z level=info msg="Executing migration" id="add index server_lock.operation_uid" grafana | logger=migrator t=2025-06-21T11:47:13.430496184Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=967.658µs grafana | logger=migrator t=2025-06-21T11:47:13.433951338Z level=info msg="Executing migration" id="create user auth token table" grafana | logger=migrator t=2025-06-21T11:47:13.434917706Z level=info msg="Migration successfully executed" id="create user auth token table" 
duration=965.958µs grafana | logger=migrator t=2025-06-21T11:47:13.440634061Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" grafana | logger=migrator t=2025-06-21T11:47:13.442325338Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.690587ms grafana | logger=migrator t=2025-06-21T11:47:13.44573651Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" grafana | logger=migrator t=2025-06-21T11:47:13.447564037Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.826767ms grafana | logger=migrator t=2025-06-21T11:47:13.455130471Z level=info msg="Executing migration" id="add index user_auth_token.user_id" grafana | logger=migrator t=2025-06-21T11:47:13.456797556Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.666955ms grafana | logger=migrator t=2025-06-21T11:47:13.461646072Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" grafana | logger=migrator t=2025-06-21T11:47:13.465753292Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=4.10555ms grafana | logger=migrator t=2025-06-21T11:47:13.470132675Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2025-06-21T11:47:13.470875311Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=742.726µs grafana | logger=migrator t=2025-06-21T11:47:13.47489122Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" grafana | logger=migrator t=2025-06-21T11:47:13.479000889Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=4.105579ms grafana | logger=migrator t=2025-06-21T11:47:13.545224893Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2025-06-21T11:47:13.546157262Z level=info msg="Migration successfully executed" id="create cache_data table" duration=935.709µs grafana | logger=migrator t=2025-06-21T11:47:13.549294922Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2025-06-21T11:47:13.549957349Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=662.687µs grafana | logger=migrator t=2025-06-21T11:47:13.554534102Z level=info msg="Executing migration" id="create short_url table v1" grafana | logger=migrator t=2025-06-21T11:47:13.555125628Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=591.386µs grafana | logger=migrator t=2025-06-21T11:47:13.557896224Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2025-06-21T11:47:13.558555071Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=658.567µs grafana | logger=migrator t=2025-06-21T11:47:13.561355198Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator t=2025-06-21T11:47:13.561371028Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=16.05µs grafana | logger=migrator t=2025-06-21T11:47:13.568046061Z level=info msg="Executing migration" id="delete 
alert_definition table" grafana | logger=migrator t=2025-06-21T11:47:13.568211034Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=165.843µs grafana | logger=migrator t=2025-06-21T11:47:13.571563015Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2025-06-21T11:47:13.572648906Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.087471ms grafana | logger=migrator t=2025-06-21T11:47:13.576008498Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-21T11:47:13.576983907Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=975.459µs grafana | logger=migrator t=2025-06-21T11:47:13.579999406Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-21T11:47:13.581002106Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.00272ms grafana | logger=migrator t=2025-06-21T11:47:13.58566574Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-21T11:47:13.58568225Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=17.38µs grafana | logger=migrator t=2025-06-21T11:47:13.58875274Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-21T11:47:13.589478317Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=725.917µs grafana | logger=migrator t=2025-06-21T11:47:13.594765998Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-21T11:47:13.595482284Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=716.156µs grafana | logger=migrator t=2025-06-21T11:47:13.59812324Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-21T11:47:13.598821166Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=698.106µs grafana | logger=migrator t=2025-06-21T11:47:13.601497612Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-21T11:47:13.602195899Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=697.837µs grafana | logger=migrator t=2025-06-21T11:47:13.608291717Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2025-06-21T11:47:13.614197873Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.905546ms grafana | logger=migrator t=2025-06-21T11:47:13.61906015Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2025-06-21T11:47:13.61998609Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=925.25µs 
grafana | logger=migrator t=2025-06-21T11:47:13.626187098Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2025-06-21T11:47:13.62637437Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=186.682µs grafana | logger=migrator t=2025-06-21T11:47:13.630793972Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2025-06-21T11:47:13.631868193Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.074991ms grafana | logger=migrator t=2025-06-21T11:47:13.637258814Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2025-06-21T11:47:13.638245894Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=987.03µs grafana | logger=migrator t=2025-06-21T11:47:13.641368724Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator t=2025-06-21T11:47:13.642300452Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=931.648µs grafana | logger=migrator t=2025-06-21T11:47:13.645355502Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-21T11:47:13.645375553Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=19.811µs grafana | logger=migrator t=2025-06-21T11:47:13.650123927Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2025-06-21T11:47:13.650743174Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=619.447µs grafana | logger=migrator t=2025-06-21T11:47:13.654367678Z level=info msg="Executing migration" id="create alert_instance table" grafana | logger=migrator t=2025-06-21T11:47:13.655063575Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=695.507µs grafana | logger=migrator t=2025-06-21T11:47:13.659048333Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2025-06-21T11:47:13.660636459Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.587916ms grafana | logger=migrator t=2025-06-21T11:47:13.663872939Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2025-06-21T11:47:13.665368564Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.492144ms grafana | logger=migrator t=2025-06-21T11:47:13.668837466Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2025-06-21T11:47:13.675295078Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.455972ms grafana | logger=migrator 
t=2025-06-21T11:47:13.680462859Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-21T11:47:13.681383167Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=920.358µs grafana | logger=migrator t=2025-06-21T11:47:13.68585427Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-21T11:47:13.687244093Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.389103ms grafana | logger=migrator t=2025-06-21T11:47:13.76216222Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2025-06-21T11:47:13.790272129Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=28.108789ms grafana | logger=migrator t=2025-06-21T11:47:13.795718322Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2025-06-21T11:47:13.825643618Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=29.924796ms grafana | logger=migrator t=2025-06-21T11:47:13.829658107Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-21T11:47:13.830687237Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.029481ms grafana | logger=migrator t=2025-06-21T11:47:13.834064679Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-21T11:47:13.835057408Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=992.599µs grafana | logger=migrator t=2025-06-21T11:47:13.842031095Z level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | logger=migrator t=2025-06-21T11:47:13.848364946Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.336201ms grafana | logger=migrator t=2025-06-21T11:47:13.851780678Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2025-06-21T11:47:13.857369632Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.588104ms grafana | logger=migrator t=2025-06-21T11:47:13.963207196Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2025-06-21T11:47:13.964468158Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.262532ms grafana | logger=migrator t=2025-06-21T11:47:14.0554237Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2025-06-21T11:47:14.05747062Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=2.04925ms grafana | logger=migrator t=2025-06-21T11:47:14.184367752Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" grafana | logger=migrator t=2025-06-21T11:47:14.18628049Z 
level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.919478ms grafana | logger=migrator t=2025-06-21T11:47:14.235920166Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" grafana | logger=migrator t=2025-06-21T11:47:14.237830365Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.911569ms grafana | logger=migrator t=2025-06-21T11:47:14.285223209Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-21T11:47:14.285260539Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=40.11µs grafana | logger=migrator t=2025-06-21T11:47:14.290110117Z level=info msg="Executing migration" id="add column for to alert_rule" grafana | logger=migrator t=2025-06-21T11:47:14.298364668Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=8.281842ms grafana | logger=migrator t=2025-06-21T11:47:14.301292846Z level=info msg="Executing migration" id="add column annotations to alert_rule" grafana | logger=migrator t=2025-06-21T11:47:14.3057814Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.488014ms grafana | logger=migrator t=2025-06-21T11:47:14.311215353Z level=info msg="Executing migration" id="add column labels to alert_rule" grafana | logger=migrator t=2025-06-21T11:47:14.31600128Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.785197ms grafana | logger=migrator t=2025-06-21T11:47:14.348040564Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" grafana | logger=migrator t=2025-06-21T11:47:14.349583639Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.543345ms grafana | logger=migrator t=2025-06-21T11:47:14.354456747Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" grafana | logger=migrator t=2025-06-21T11:47:14.356147083Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.685996ms grafana | logger=migrator t=2025-06-21T11:47:14.361216573Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" grafana | logger=migrator t=2025-06-21T11:47:14.368428983Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=7.21815ms grafana | logger=migrator t=2025-06-21T11:47:14.372722495Z level=info msg="Executing migration" id="add panel_id column to alert_rule" grafana | logger=migrator t=2025-06-21T11:47:14.380748444Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=8.023559ms grafana | logger=migrator t=2025-06-21T11:47:14.386361979Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" grafana | logger=migrator t=2025-06-21T11:47:14.387813483Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.453064ms grafana | logger=migrator t=2025-06-21T11:47:14.393243597Z level=info msg="Executing migration" id="add 
rule_group_idx column to alert_rule" grafana | logger=migrator t=2025-06-21T11:47:14.402336395Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=9.093658ms grafana | logger=migrator t=2025-06-21T11:47:14.407286144Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" grafana | logger=migrator t=2025-06-21T11:47:14.412488445Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.202041ms grafana | logger=migrator t=2025-06-21T11:47:14.416712307Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" grafana | logger=migrator t=2025-06-21T11:47:14.416736327Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=19.86µs grafana | logger=migrator t=2025-06-21T11:47:14.420813937Z level=info msg="Executing migration" id="create alert_rule_version table" grafana | logger=migrator t=2025-06-21T11:47:14.422275961Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.463174ms grafana | logger=migrator t=2025-06-21T11:47:14.429023806Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-21T11:47:14.430103167Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.078691ms grafana | logger=migrator t=2025-06-21T11:47:14.43546706Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" grafana | logger=migrator t=2025-06-21T11:47:14.436565781Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.098681ms grafana | logger=migrator t=2025-06-21T11:47:14.439898503Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-21T11:47:14.439918843Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=21.1µs grafana | logger=migrator t=2025-06-21T11:47:14.44268265Z level=info msg="Executing migration" id="add column for to alert_rule_version" grafana | logger=migrator t=2025-06-21T11:47:14.447397177Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=4.714887ms grafana | logger=migrator t=2025-06-21T11:47:14.451886661Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" grafana | logger=migrator t=2025-06-21T11:47:14.458607207Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.722186ms grafana | logger=migrator t=2025-06-21T11:47:14.461786157Z level=info msg="Executing migration" id="add column labels to alert_rule_version" grafana | logger=migrator t=2025-06-21T11:47:14.468064918Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.278291ms grafana | logger=migrator t=2025-06-21T11:47:14.471365412Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" grafana | logger=migrator t=2025-06-21T11:47:14.480827943Z level=info msg="Migration successfully executed" id="add rule_group_idx 
column to alert_rule_version" duration=9.463341ms grafana | logger=migrator t=2025-06-21T11:47:14.486102835Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" grafana | logger=migrator t=2025-06-21T11:47:14.493035183Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.932128ms grafana | logger=migrator t=2025-06-21T11:47:14.495905301Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" grafana | logger=migrator t=2025-06-21T11:47:14.495924251Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=19.75µs grafana | logger=migrator t=2025-06-21T11:47:14.499327755Z level=info msg="Executing migration" id=create_alert_configuration_table grafana | logger=migrator t=2025-06-21T11:47:14.500178373Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=850.108µs grafana | logger=migrator t=2025-06-21T11:47:14.533320818Z level=info msg="Executing migration" id="Add column default in alert_configuration" grafana | logger=migrator t=2025-06-21T11:47:14.543445537Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=10.125239ms grafana | logger=migrator t=2025-06-21T11:47:14.546615537Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" grafana | logger=migrator t=2025-06-21T11:47:14.546632518Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=17.881µs grafana | logger=migrator t=2025-06-21T11:47:14.549099692Z level=info msg="Executing migration" id="add column org_id in alert_configuration" grafana | logger=migrator t=2025-06-21T11:47:14.555470455Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.370193ms grafana | logger=migrator t=2025-06-21T11:47:14.558603405Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" grafana | logger=migrator t=2025-06-21T11:47:14.559713787Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.110081ms grafana | logger=migrator t=2025-06-21T11:47:14.564049509Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" grafana | logger=migrator t=2025-06-21T11:47:14.570418841Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.368672ms grafana | logger=migrator t=2025-06-21T11:47:14.573261868Z level=info msg="Executing migration" id=create_ngalert_configuration_table grafana | logger=migrator t=2025-06-21T11:47:14.573869915Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=607.897µs grafana | logger=migrator t=2025-06-21T11:47:14.576718023Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" grafana | logger=migrator t=2025-06-21T11:47:14.57749691Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=778.537µs grafana | logger=migrator t=2025-06-21T11:47:14.582194496Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" grafana | 
logger=migrator t=2025-06-21T11:47:14.5917695Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=9.576044ms grafana | logger=migrator t=2025-06-21T11:47:14.594530568Z level=info msg="Executing migration" id="create provenance_type table" grafana | logger=migrator t=2025-06-21T11:47:14.595119843Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=590.256µs grafana | logger=migrator t=2025-06-21T11:47:14.598026201Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" grafana | logger=migrator t=2025-06-21T11:47:14.59884645Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=820.299µs grafana | logger=migrator t=2025-06-21T11:47:14.603459544Z level=info msg="Executing migration" id="create alert_image table" grafana | logger=migrator t=2025-06-21T11:47:14.604325013Z level=info msg="Migration successfully executed" id="create alert_image table" duration=865.739µs grafana | logger=migrator t=2025-06-21T11:47:14.607536624Z level=info msg="Executing migration" id="add unique index on token to alert_image table" grafana | logger=migrator t=2025-06-21T11:47:14.608979898Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.438224ms grafana | logger=migrator t=2025-06-21T11:47:14.61226584Z level=info msg="Executing migration" id="support longer URLs in alert_image table" grafana | logger=migrator t=2025-06-21T11:47:14.61229122Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=26.51µs grafana | logger=migrator t=2025-06-21T11:47:14.617192919Z level=info msg="Executing migration" id=create_alert_configuration_history_table grafana | logger=migrator t=2025-06-21T11:47:14.618091547Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=897.498µs grafana | logger=migrator t=2025-06-21T11:47:14.621532221Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" grafana | logger=migrator t=2025-06-21T11:47:14.623106317Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.574206ms grafana | logger=migrator t=2025-06-21T11:47:14.626720113Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-21T11:47:14.62757925Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-21T11:47:14.632752161Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" grafana | logger=migrator t=2025-06-21T11:47:14.63361917Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=869.699µs grafana | logger=migrator t=2025-06-21T11:47:14.638475917Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" grafana | logger=migrator t=2025-06-21T11:47:14.640091713Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.615556ms grafana | logger=migrator t=2025-06-21T11:47:14.643380875Z level=info msg="Executing migration" id="add 
last_applied column to alert_configuration_history" grafana | logger=migrator t=2025-06-21T11:47:14.650165201Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.780806ms grafana | logger=migrator t=2025-06-21T11:47:14.653255203Z level=info msg="Executing migration" id="create library_element table v1" grafana | logger=migrator t=2025-06-21T11:47:14.654236261Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=981.019µs grafana | logger=migrator t=2025-06-21T11:47:14.658529903Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" grafana | logger=migrator t=2025-06-21T11:47:14.659901247Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.369684ms grafana | logger=migrator t=2025-06-21T11:47:14.663374631Z level=info msg="Executing migration" id="create library_element_connection table v1" grafana | logger=migrator t=2025-06-21T11:47:14.664735615Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.360493ms grafana | logger=migrator t=2025-06-21T11:47:14.669502341Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" grafana | logger=migrator t=2025-06-21T11:47:14.670479121Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=976.73µs grafana | logger=migrator t=2025-06-21T11:47:14.673756582Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" grafana | logger=migrator t=2025-06-21T11:47:14.674741812Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=984.71µs grafana | logger=migrator t=2025-06-21T11:47:14.677903553Z level=info msg="Executing migration" id="increase max description length to 2048" grafana | logger=migrator t=2025-06-21T11:47:14.677940743Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=38.57µs grafana | logger=migrator t=2025-06-21T11:47:14.682899022Z level=info msg="Executing migration" id="alter library_element model to mediumtext" grafana | logger=migrator t=2025-06-21T11:47:14.682929743Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=28.261µs grafana | logger=migrator t=2025-06-21T11:47:14.687724989Z level=info msg="Executing migration" id="add library_element folder uid" grafana | logger=migrator t=2025-06-21T11:47:14.694842259Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=7.11688ms grafana | logger=migrator t=2025-06-21T11:47:14.726877942Z level=info msg="Executing migration" id="populate library_element folder_uid" grafana | logger=migrator t=2025-06-21T11:47:14.727457799Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=580.907µs grafana | logger=migrator t=2025-06-21T11:47:14.731311616Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" grafana | logger=migrator t=2025-06-21T11:47:14.733039933Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.727167ms grafana | logger=migrator t=2025-06-21T11:47:14.738048662Z level=info msg="Executing migration" 
id="clone move dashboard alerts to unified alerting" grafana | logger=migrator t=2025-06-21T11:47:14.738439365Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=390.793µs grafana | logger=migrator t=2025-06-21T11:47:14.742298993Z level=info msg="Executing migration" id="create data_keys table" grafana | logger=migrator t=2025-06-21T11:47:14.743295024Z level=info msg="Migration successfully executed" id="create data_keys table" duration=993.511µs grafana | logger=migrator t=2025-06-21T11:47:14.746761747Z level=info msg="Executing migration" id="create secrets table" grafana | logger=migrator t=2025-06-21T11:47:14.747604126Z level=info msg="Migration successfully executed" id="create secrets table" duration=842.059µs grafana | logger=migrator t=2025-06-21T11:47:14.751899618Z level=info msg="Executing migration" id="rename data_keys name column to id" grafana | logger=migrator t=2025-06-21T11:47:14.785824949Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=33.926241ms grafana | logger=migrator t=2025-06-21T11:47:14.791794148Z level=info msg="Executing migration" id="add name column into data_keys" grafana | logger=migrator t=2025-06-21T11:47:14.796964758Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.17028ms grafana | logger=migrator t=2025-06-21T11:47:14.811500222Z level=info msg="Executing migration" id="copy data_keys id column values into name" grafana | logger=migrator t=2025-06-21T11:47:14.811717694Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=218.072µs grafana | logger=migrator t=2025-06-21T11:47:14.815124046Z level=info msg="Executing migration" id="rename data_keys name column to label" grafana | logger=migrator t=2025-06-21T11:47:14.8502758Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=35.155614ms grafana | logger=migrator t=2025-06-21T11:47:14.855589092Z level=info msg="Executing migration" id="rename data_keys id column back to name" grafana | logger=migrator t=2025-06-21T11:47:14.887279353Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=31.689911ms grafana | logger=migrator t=2025-06-21T11:47:14.921206905Z level=info msg="Executing migration" id="create kv_store table v1" grafana | logger=migrator t=2025-06-21T11:47:14.922655229Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.447764ms grafana | logger=migrator t=2025-06-21T11:47:14.926404656Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" grafana | logger=migrator t=2025-06-21T11:47:14.928139543Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.734017ms grafana | logger=migrator t=2025-06-21T11:47:14.933450015Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" grafana | logger=migrator t=2025-06-21T11:47:14.933664927Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=214.652µs grafana | logger=migrator t=2025-06-21T11:47:14.936877848Z level=info msg="Executing migration" id="create permission table" grafana | logger=migrator t=2025-06-21T11:47:14.938151131Z level=info msg="Migration successfully executed" id="create permission table" 
duration=1.271843ms grafana | logger=migrator t=2025-06-21T11:47:14.941889168Z level=info msg="Executing migration" id="add unique index permission.role_id" grafana | logger=migrator t=2025-06-21T11:47:14.943589024Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.700006ms grafana | logger=migrator t=2025-06-21T11:47:14.949204699Z level=info msg="Executing migration" id="add unique index role_id_action_scope" grafana | logger=migrator t=2025-06-21T11:47:14.95029652Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.091851ms grafana | logger=migrator t=2025-06-21T11:47:14.953490291Z level=info msg="Executing migration" id="create role table" grafana | logger=migrator t=2025-06-21T11:47:14.954719573Z level=info msg="Migration successfully executed" id="create role table" duration=1.260902ms grafana | logger=migrator t=2025-06-21T11:47:14.958202987Z level=info msg="Executing migration" id="add column display_name" grafana | logger=migrator t=2025-06-21T11:47:14.965185975Z level=info msg="Migration successfully executed" id="add column display_name" duration=6.983198ms grafana | logger=migrator t=2025-06-21T11:47:14.969917492Z level=info msg="Executing migration" id="add column group_name" grafana | logger=migrator t=2025-06-21T11:47:14.976744619Z level=info msg="Migration successfully executed" id="add column group_name" duration=6.825877ms grafana | logger=migrator t=2025-06-21T11:47:14.980300473Z level=info msg="Executing migration" id="add index role.org_id" grafana | logger=migrator t=2025-06-21T11:47:14.981692777Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.421984ms grafana | logger=migrator t=2025-06-21T11:47:14.984931068Z level=info msg="Executing migration" id="add unique index role_org_id_name" grafana | logger=migrator t=2025-06-21T11:47:14.98608029Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.148792ms grafana | logger=migrator t=2025-06-21T11:47:14.99114938Z level=info msg="Executing migration" id="add index role_org_id_uid" grafana | logger=migrator t=2025-06-21T11:47:14.992270741Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.121141ms grafana | logger=migrator t=2025-06-21T11:47:14.995661854Z level=info msg="Executing migration" id="create team role table" grafana | logger=migrator t=2025-06-21T11:47:14.996598653Z level=info msg="Migration successfully executed" id="create team role table" duration=936.349µs grafana | logger=migrator t=2025-06-21T11:47:15.001346469Z level=info msg="Executing migration" id="add index team_role.org_id" grafana | logger=migrator t=2025-06-21T11:47:15.002488041Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.138462ms grafana | logger=migrator t=2025-06-21T11:47:15.007436948Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" grafana | logger=migrator t=2025-06-21T11:47:15.008895862Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.458494ms grafana | logger=migrator t=2025-06-21T11:47:15.012133353Z level=info msg="Executing migration" id="add index team_role.team_id" grafana | logger=migrator t=2025-06-21T11:47:15.013290564Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.156811ms grafana | logger=migrator 
t=2025-06-21T11:47:15.016740696Z level=info msg="Executing migration" id="create user role table" grafana | logger=migrator t=2025-06-21T11:47:15.017624715Z level=info msg="Migration successfully executed" id="create user role table" duration=883.429µs grafana | logger=migrator t=2025-06-21T11:47:15.022651662Z level=info msg="Executing migration" id="add index user_role.org_id" grafana | logger=migrator t=2025-06-21T11:47:15.023808123Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.156481ms grafana | logger=migrator t=2025-06-21T11:47:15.026908602Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" grafana | logger=migrator t=2025-06-21T11:47:15.028092764Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.183542ms grafana | logger=migrator t=2025-06-21T11:47:15.031346785Z level=info msg="Executing migration" id="add index user_role.user_id" grafana | logger=migrator t=2025-06-21T11:47:15.032603086Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.255991ms grafana | logger=migrator t=2025-06-21T11:47:15.036573454Z level=info msg="Executing migration" id="create builtin role table" grafana | logger=migrator t=2025-06-21T11:47:15.039217849Z level=info msg="Migration successfully executed" id="create builtin role table" duration=2.647285ms grafana | logger=migrator t=2025-06-21T11:47:15.045098245Z level=info msg="Executing migration" id="add index builtin_role.role_id" grafana | logger=migrator t=2025-06-21T11:47:15.046223425Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.12545ms grafana | logger=migrator t=2025-06-21T11:47:15.049281154Z level=info msg="Executing migration" id="add index builtin_role.name" grafana | logger=migrator t=2025-06-21T11:47:15.050387065Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.105781ms grafana | logger=migrator t=2025-06-21T11:47:15.055833956Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" grafana | logger=migrator t=2025-06-21T11:47:15.063873912Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.040846ms grafana | logger=migrator t=2025-06-21T11:47:15.069497546Z level=info msg="Executing migration" id="add index builtin_role.org_id" grafana | logger=migrator t=2025-06-21T11:47:15.071240002Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.742176ms grafana | logger=migrator t=2025-06-21T11:47:15.099371858Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" grafana | logger=migrator t=2025-06-21T11:47:15.101440878Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=2.0701ms grafana | logger=migrator t=2025-06-21T11:47:15.107127571Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" grafana | logger=migrator t=2025-06-21T11:47:15.108337393Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.214532ms grafana | logger=migrator t=2025-06-21T11:47:15.113941416Z level=info msg="Executing migration" id="add unique index role.uid" grafana | logger=migrator t=2025-06-21T11:47:15.115056836Z level=info msg="Migration successfully executed" id="add unique index role.uid" 
duration=1.11489ms grafana | logger=migrator t=2025-06-21T11:47:15.117956563Z level=info msg="Executing migration" id="create seed assignment table" grafana | logger=migrator t=2025-06-21T11:47:15.118820952Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=864.329µs grafana | logger=migrator t=2025-06-21T11:47:15.121453266Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" grafana | logger=migrator t=2025-06-21T11:47:15.122611158Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.156441ms grafana | logger=migrator t=2025-06-21T11:47:15.128778185Z level=info msg="Executing migration" id="add column hidden to role table" grafana | logger=migrator t=2025-06-21T11:47:15.136974253Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.195438ms grafana | logger=migrator t=2025-06-21T11:47:15.142551716Z level=info msg="Executing migration" id="permission kind migration" grafana | logger=migrator t=2025-06-21T11:47:15.150473521Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.919085ms grafana | logger=migrator t=2025-06-21T11:47:15.154961483Z level=info msg="Executing migration" id="permission attribute migration" grafana | logger=migrator t=2025-06-21T11:47:15.160877219Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.915326ms grafana | logger=migrator t=2025-06-21T11:47:15.165478822Z level=info msg="Executing migration" id="permission identifier migration" grafana | logger=migrator t=2025-06-21T11:47:15.173820761Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.340959ms grafana | logger=migrator t=2025-06-21T11:47:15.176669388Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2025-06-21T11:47:15.177779689Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.110001ms grafana | logger=migrator t=2025-06-21T11:47:15.181392153Z level=info msg="Executing migration" id="add permission action scope role_id index" grafana | logger=migrator t=2025-06-21T11:47:15.182480473Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.08786ms grafana | logger=migrator t=2025-06-21T11:47:15.188729072Z level=info msg="Executing migration" id="remove permission role_id action scope index" grafana | logger=migrator t=2025-06-21T11:47:15.190159716Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.429344ms grafana | logger=migrator t=2025-06-21T11:47:15.193832621Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" grafana | logger=migrator t=2025-06-21T11:47:15.203872396Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=10.041065ms grafana | logger=migrator t=2025-06-21T11:47:15.207127776Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" grafana | logger=migrator t=2025-06-21T11:47:15.208235767Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=1.10754ms grafana | logger=migrator t=2025-06-21T11:47:15.21290077Z level=info msg="Executing migration" id="remove user_role org 
ID, user ID, role ID index" grafana | logger=migrator t=2025-06-21T11:47:15.21389095Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=989.97µs grafana | logger=migrator t=2025-06-21T11:47:15.217270252Z level=info msg="Executing migration" id="create query_history table v1" grafana | logger=migrator t=2025-06-21T11:47:15.218465643Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.193431ms grafana | logger=migrator t=2025-06-21T11:47:15.221801615Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" grafana | logger=migrator t=2025-06-21T11:47:15.223527071Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.725316ms grafana | logger=migrator t=2025-06-21T11:47:15.229170374Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" grafana | logger=migrator t=2025-06-21T11:47:15.229200805Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=31.671µs grafana | logger=migrator t=2025-06-21T11:47:15.233850588Z level=info msg="Executing migration" id="create query_history_details table v1" grafana | logger=migrator t=2025-06-21T11:47:15.234950759Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=1.100321ms grafana | logger=migrator t=2025-06-21T11:47:15.238142089Z level=info msg="Executing migration" id="rbac disabled migrator" grafana | logger=migrator t=2025-06-21T11:47:15.238189709Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=50.36µs grafana | logger=migrator t=2025-06-21T11:47:15.242791393Z level=info msg="Executing migration" id="teams permissions migration" grafana | logger=migrator t=2025-06-21T11:47:15.243432799Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=641.506µs grafana | logger=migrator t=2025-06-21T11:47:15.246679229Z level=info msg="Executing migration" id="dashboard permissions" grafana | logger=migrator t=2025-06-21T11:47:15.247515718Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=837.429µs grafana | logger=migrator t=2025-06-21T11:47:15.251907129Z level=info msg="Executing migration" id="dashboard permissions uid scopes" grafana | logger=migrator t=2025-06-21T11:47:15.252592186Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=684.817µs grafana | logger=migrator t=2025-06-21T11:47:15.255723425Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2025-06-21T11:47:15.255927087Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=203.842µs grafana | logger=migrator t=2025-06-21T11:47:15.260992596Z level=info msg="Executing migration" id="alerting notification permissions" grafana | logger=migrator t=2025-06-21T11:47:15.261685412Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=689.766µs grafana | logger=migrator t=2025-06-21T11:47:15.309637694Z level=info msg="Executing migration" id="create query_history_star table v1" grafana | logger=migrator t=2025-06-21T11:47:15.311284791Z level=info msg="Migration successfully executed" id="create query_history_star table v1" 
duration=1.645927ms grafana | logger=migrator t=2025-06-21T11:47:15.315184647Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2025-06-21T11:47:15.316269507Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.08454ms grafana | logger=migrator t=2025-06-21T11:47:15.320999592Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2025-06-21T11:47:15.329556043Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.558341ms grafana | logger=migrator t=2025-06-21T11:47:15.33459731Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2025-06-21T11:47:15.334621661Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=30.621µs grafana | logger=migrator t=2025-06-21T11:47:15.338215415Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2025-06-21T11:47:15.339228264Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.012839ms grafana | logger=migrator t=2025-06-21T11:47:15.342581556Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2025-06-21T11:47:15.343956839Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.369733ms grafana | logger=migrator t=2025-06-21T11:47:15.348886205Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2025-06-21T11:47:15.350611782Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.723777ms grafana | logger=migrator t=2025-06-21T11:47:15.354196165Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2025-06-21T11:47:15.362612455Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.41579ms grafana | logger=migrator t=2025-06-21T11:47:15.369644552Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2025-06-21T11:47:15.370671952Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.027441ms grafana | logger=migrator t=2025-06-21T11:47:15.377285344Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2025-06-21T11:47:15.378906119Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.626005ms grafana | logger=migrator t=2025-06-21T11:47:15.382671765Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-21T11:47:15.406076576Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=23.405431ms grafana | logger=migrator t=2025-06-21T11:47:15.411139754Z level=info msg="Executing migration" id="create correlation v2" grafana | logger=migrator t=2025-06-21T11:47:15.411925231Z level=info msg="Migration successfully executed" id="create correlation v2" duration=785.507µs grafana | logger=migrator t=2025-06-21T11:47:15.41494498Z level=info msg="Executing migration" id="create index 
IDX_correlation_uid - v2" grafana | logger=migrator t=2025-06-21T11:47:15.415708957Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=763.317µs grafana | logger=migrator t=2025-06-21T11:47:15.418907547Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2025-06-21T11:47:15.420700245Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.791278ms grafana | logger=migrator t=2025-06-21T11:47:15.428364607Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2025-06-21T11:47:15.430186444Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.822127ms grafana | logger=migrator t=2025-06-21T11:47:15.437171449Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2025-06-21T11:47:15.437564624Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=395.905µs grafana | logger=migrator t=2025-06-21T11:47:15.440844504Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" grafana | logger=migrator t=2025-06-21T11:47:15.441703093Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=858.229µs grafana | logger=migrator t=2025-06-21T11:47:15.445570479Z level=info msg="Executing migration" id="add provisioning column" grafana | logger=migrator t=2025-06-21T11:47:15.454889887Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.318878ms grafana | logger=migrator t=2025-06-21T11:47:15.460824283Z level=info msg="Executing migration" id="add type column" grafana | logger=migrator t=2025-06-21T11:47:15.469206802Z level=info msg="Migration successfully executed" id="add type column" duration=8.385559ms grafana | logger=migrator t=2025-06-21T11:47:15.496794673Z level=info msg="Executing migration" id="create entity_events table" grafana | logger=migrator t=2025-06-21T11:47:15.498126616Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.332043ms grafana | logger=migrator t=2025-06-21T11:47:15.502080084Z level=info msg="Executing migration" id="create dashboard public config v1" grafana | logger=migrator t=2025-06-21T11:47:15.503681099Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.604915ms grafana | logger=migrator t=2025-06-21T11:47:15.511651364Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-21T11:47:15.512665153Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-21T11:47:15.516901983Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-21T11:47:15.517712191Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-21T11:47:15.522353015Z level=info msg="Executing migration" id="Drop old dashboard public config table" grafana | logger=migrator t=2025-06-21T11:47:15.523308584Z level=info msg="Migration successfully executed" id="Drop old dashboard 
public config table" duration=954.8µs grafana | logger=migrator t=2025-06-21T11:47:15.52925962Z level=info msg="Executing migration" id="recreate dashboard public config v1" grafana | logger=migrator t=2025-06-21T11:47:15.530985086Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.724806ms grafana | logger=migrator t=2025-06-21T11:47:15.536142475Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-21T11:47:15.537667919Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.526154ms grafana | logger=migrator t=2025-06-21T11:47:15.543509854Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-21T11:47:15.544654476Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.144142ms grafana | logger=migrator t=2025-06-21T11:47:15.549677483Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-21T11:47:15.550735793Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.05792ms grafana | logger=migrator t=2025-06-21T11:47:15.556316566Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-21T11:47:15.557962772Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.646136ms grafana | logger=migrator t=2025-06-21T11:47:15.56319851Z level=info msg="Executing migration" id="Drop public config table" grafana | logger=migrator t=2025-06-21T11:47:15.564106019Z level=info msg="Migration successfully executed" id="Drop public config table" duration=907.169µs grafana | logger=migrator t=2025-06-21T11:47:15.569542301Z level=info msg="Executing migration" id="Recreate dashboard public config v2" grafana | logger=migrator t=2025-06-21T11:47:15.570837423Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.294672ms grafana | logger=migrator t=2025-06-21T11:47:15.575230205Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-21T11:47:15.576415936Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.165661ms grafana | logger=migrator t=2025-06-21T11:47:15.582933088Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-21T11:47:15.584861845Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.928177ms grafana | logger=migrator t=2025-06-21T11:47:15.590953723Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" grafana | logger=migrator t=2025-06-21T11:47:15.592017563Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.06373ms grafana | logger=migrator t=2025-06-21T11:47:15.595467745Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" grafana | 
logger=migrator t=2025-06-21T11:47:15.618285812Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=22.817647ms grafana | logger=migrator t=2025-06-21T11:47:15.623338069Z level=info msg="Executing migration" id="add annotations_enabled column" grafana | logger=migrator t=2025-06-21T11:47:15.629566538Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.228049ms grafana | logger=migrator t=2025-06-21T11:47:15.634958589Z level=info msg="Executing migration" id="add time_selection_enabled column" grafana | logger=migrator t=2025-06-21T11:47:15.643847573Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.888284ms grafana | logger=migrator t=2025-06-21T11:47:15.647590639Z level=info msg="Executing migration" id="delete orphaned public dashboards" grafana | logger=migrator t=2025-06-21T11:47:15.647885901Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=295.142µs grafana | logger=migrator t=2025-06-21T11:47:15.651584946Z level=info msg="Executing migration" id="add share column" grafana | logger=migrator t=2025-06-21T11:47:15.660377319Z level=info msg="Migration successfully executed" id="add share column" duration=8.794013ms grafana | logger=migrator t=2025-06-21T11:47:15.671713316Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" grafana | logger=migrator t=2025-06-21T11:47:15.672011019Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=298.243µs grafana | logger=migrator t=2025-06-21T11:47:15.676918196Z level=info msg="Executing migration" id="create file table" grafana | logger=migrator t=2025-06-21T11:47:15.67838562Z level=info msg="Migration successfully executed" id="create file table" duration=1.463103ms grafana | logger=migrator t=2025-06-21T11:47:15.681837992Z level=info msg="Executing migration" id="file table idx: path natural pk" grafana | logger=migrator t=2025-06-21T11:47:15.683020673Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.182321ms grafana | logger=migrator t=2025-06-21T11:47:15.687311914Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2025-06-21T11:47:15.688660206Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.346392ms grafana | logger=migrator t=2025-06-21T11:47:15.695060047Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2025-06-21T11:47:15.696318469Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.258532ms grafana | logger=migrator t=2025-06-21T11:47:15.700046494Z level=info msg="Executing migration" id="file table idx: path key" grafana | logger=migrator t=2025-06-21T11:47:15.701129924Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.08304ms grafana | logger=migrator t=2025-06-21T11:47:15.704785579Z level=info msg="Executing migration" id="set path collation in file table" grafana | logger=migrator t=2025-06-21T11:47:15.704802729Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=20.45µs grafana | logger=migrator t=2025-06-21T11:47:15.70912961Z 
level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" grafana | logger=migrator t=2025-06-21T11:47:15.70915403Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=25.19µs grafana | logger=migrator t=2025-06-21T11:47:15.714618202Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator t=2025-06-21T11:47:15.715417469Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=798.987µs grafana | logger=migrator t=2025-06-21T11:47:15.719438397Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2025-06-21T11:47:15.71975649Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=318.033µs grafana | logger=migrator t=2025-06-21T11:47:15.723302444Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2025-06-21T11:47:15.725300942Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.994318ms grafana | logger=migrator t=2025-06-21T11:47:15.729662534Z level=info msg="Executing migration" id="Add UID column to playlist" grafana | logger=migrator t=2025-06-21T11:47:15.738476397Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=8.813223ms grafana | logger=migrator t=2025-06-21T11:47:15.742768488Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2025-06-21T11:47:15.74296335Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=191.762µs grafana | logger=migrator t=2025-06-21T11:47:15.746257801Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2025-06-21T11:47:15.747442502Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.184271ms grafana | logger=migrator t=2025-06-21T11:47:15.752000925Z level=info msg="Executing migration" id="update group index for alert rules" grafana | logger=migrator t=2025-06-21T11:47:15.752701002Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=699.757µs grafana | logger=migrator t=2025-06-21T11:47:15.756188514Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" grafana | logger=migrator t=2025-06-21T11:47:15.757297555Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=1.105191ms grafana | logger=migrator t=2025-06-21T11:47:15.763936808Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator t=2025-06-21T11:47:15.764838336Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=901.208µs grafana | logger=migrator t=2025-06-21T11:47:15.770293067Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2025-06-21T11:47:15.779808427Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.51666ms grafana | logger=migrator t=2025-06-21T11:47:15.783829116Z level=info msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator t=2025-06-21T11:47:15.791553759Z level=info 
msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.723683ms grafana | logger=migrator t=2025-06-21T11:47:15.796287373Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" grafana | logger=migrator t=2025-06-21T11:47:15.797348754Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.060921ms grafana | logger=migrator t=2025-06-21T11:47:15.801748905Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2025-06-21T11:47:15.877750503Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=75.993238ms grafana | logger=migrator t=2025-06-21T11:47:15.882550089Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2025-06-21T11:47:15.883476307Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=926.128µs grafana | logger=migrator t=2025-06-21T11:47:15.887125632Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2025-06-21T11:47:15.888315204Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.189252ms grafana | logger=migrator t=2025-06-21T11:47:15.892089138Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2025-06-21T11:47:15.920932821Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=28.843863ms grafana | logger=migrator t=2025-06-21T11:47:15.926491534Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2025-06-21T11:47:15.933743102Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.247348ms grafana | logger=migrator t=2025-06-21T11:47:15.937642079Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2025-06-21T11:47:15.937994022Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=351.823µs grafana | logger=migrator t=2025-06-21T11:47:15.942738728Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2025-06-21T11:47:15.94301115Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=271.312µs grafana | logger=migrator t=2025-06-21T11:47:15.948426992Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | logger=migrator t=2025-06-21T11:47:15.948673714Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=246.132µs grafana | logger=migrator t=2025-06-21T11:47:15.95253714Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2025-06-21T11:47:15.952929074Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=390.544µs grafana | logger=migrator t=2025-06-21T11:47:15.957004503Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator 
t=2025-06-21T11:47:15.957431246Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=425.653µs grafana | logger=migrator t=2025-06-21T11:47:15.962507895Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2025-06-21T11:47:15.963999869Z level=info msg="Migration successfully executed" id="create folder table" duration=1.491424ms grafana | logger=migrator t=2025-06-21T11:47:15.967938256Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2025-06-21T11:47:15.969204027Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.266681ms grafana | logger=migrator t=2025-06-21T11:47:15.974191465Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2025-06-21T11:47:15.976095492Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.902457ms grafana | logger=migrator t=2025-06-21T11:47:15.98113986Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2025-06-21T11:47:15.981178601Z level=info msg="Migration successfully executed" id="Update folder title length" duration=39.301µs grafana | logger=migrator t=2025-06-21T11:47:15.984750585Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-21T11:47:15.985836765Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.0855ms grafana | logger=migrator t=2025-06-21T11:47:15.990858262Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-21T11:47:15.992422427Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.533665ms grafana | logger=migrator t=2025-06-21T11:47:15.997536886Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2025-06-21T11:47:15.999359303Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.820737ms grafana | logger=migrator t=2025-06-21T11:47:16.003932005Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2025-06-21T11:47:16.004491371Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=560.786µs grafana | logger=migrator t=2025-06-21T11:47:16.008265958Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2025-06-21T11:47:16.008538772Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=272.813µs grafana | logger=migrator t=2025-06-21T11:47:16.011934684Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2025-06-21T11:47:16.012990954Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.05446ms grafana | logger=migrator t=2025-06-21T11:47:16.017628379Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" grafana | logger=migrator t=2025-06-21T11:47:16.019400576Z level=info msg="Migration successfully executed" id="Add 
unique index UQE_folder_org_id_uid" duration=1.771167ms grafana | logger=migrator t=2025-06-21T11:47:16.023322894Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" grafana | logger=migrator t=2025-06-21T11:47:16.02507327Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.749106ms grafana | logger=migrator t=2025-06-21T11:47:16.028383593Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-21T11:47:16.029715676Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.331453ms grafana | logger=migrator t=2025-06-21T11:47:16.055938759Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" grafana | logger=migrator t=2025-06-21T11:47:16.057738977Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.800558ms grafana | logger=migrator t=2025-06-21T11:47:16.061460052Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-21T11:47:16.062768305Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.309473ms grafana | logger=migrator t=2025-06-21T11:47:16.067618642Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2025-06-21T11:47:16.06849576Z level=info msg="Migration successfully executed" id="create anon_device table" duration=876.478µs grafana | logger=migrator t=2025-06-21T11:47:16.071959184Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2025-06-21T11:47:16.073105206Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.143842ms grafana | logger=migrator t=2025-06-21T11:47:16.076382417Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2025-06-21T11:47:16.077477658Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.094441ms grafana | logger=migrator t=2025-06-21T11:47:16.081644278Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2025-06-21T11:47:16.082629067Z level=info msg="Migration successfully executed" id="create signing_key table" duration=984.369µs grafana | logger=migrator t=2025-06-21T11:47:16.087769587Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2025-06-21T11:47:16.088923319Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.151972ms grafana | logger=migrator t=2025-06-21T11:47:16.092741216Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2025-06-21T11:47:16.094280571Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.544525ms grafana | logger=migrator t=2025-06-21T11:47:16.098631683Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator t=2025-06-21T11:47:16.099045486Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to 
kvstore" duration=414.083µs grafana | logger=migrator t=2025-06-21T11:47:16.102914324Z level=info msg="Executing migration" id="Add folder_uid for dashboard" grafana | logger=migrator t=2025-06-21T11:47:16.117850239Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=14.933945ms grafana | logger=migrator t=2025-06-21T11:47:16.120802208Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2025-06-21T11:47:16.121480184Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=678.806µs grafana | logger=migrator t=2025-06-21T11:47:16.125623354Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-21T11:47:16.125673744Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=51.1µs grafana | logger=migrator t=2025-06-21T11:47:16.127835896Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-21T11:47:16.128996447Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.159611ms grafana | logger=migrator t=2025-06-21T11:47:16.131948936Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-21T11:47:16.132016076Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=65.24µs grafana | logger=migrator t=2025-06-21T11:47:16.134841714Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-21T11:47:16.136194597Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.352553ms grafana | logger=migrator t=2025-06-21T11:47:16.141061304Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-21T11:47:16.142329786Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.267942ms grafana | logger=migrator t=2025-06-21T11:47:16.145414656Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-21T11:47:16.147600087Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.200012ms grafana | logger=migrator t=2025-06-21T11:47:16.151343934Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2025-06-21T11:47:16.153150171Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.805967ms grafana | logger=migrator t=2025-06-21T11:47:16.159156879Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2025-06-21T11:47:16.159779136Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=621.537µs grafana | logger=migrator t=2025-06-21T11:47:16.162770374Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" grafana | logger=migrator t=2025-06-21T11:47:16.163047747Z level=info 
msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=278.113µs grafana | logger=migrator t=2025-06-21T11:47:16.166045486Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" grafana | logger=migrator t=2025-06-21T11:47:16.167137707Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=1.092181ms grafana | logger=migrator t=2025-06-21T11:47:16.171487859Z level=info msg="Executing migration" id="create cloud_migration table v1" grafana | logger=migrator t=2025-06-21T11:47:16.173090104Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.602315ms grafana | logger=migrator t=2025-06-21T11:47:16.176285255Z level=info msg="Executing migration" id="create cloud_migration_run table v1" grafana | logger=migrator t=2025-06-21T11:47:16.177358235Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=1.07234ms grafana | logger=migrator t=2025-06-21T11:47:16.182221273Z level=info msg="Executing migration" id="add stack_id column" grafana | logger=migrator t=2025-06-21T11:47:16.192152339Z level=info msg="Migration successfully executed" id="add stack_id column" duration=9.929956ms grafana | logger=migrator t=2025-06-21T11:47:16.19634026Z level=info msg="Executing migration" id="add region_slug column" grafana | logger=migrator t=2025-06-21T11:47:16.205828881Z level=info msg="Migration successfully executed" id="add region_slug column" duration=9.486451ms grafana | logger=migrator t=2025-06-21T11:47:16.209266204Z level=info msg="Executing migration" id="add cluster_slug column" grafana | logger=migrator t=2025-06-21T11:47:16.217025499Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=7.759515ms grafana | logger=migrator t=2025-06-21T11:47:16.248175152Z level=info msg="Executing migration" id="add migration uid column" grafana | logger=migrator t=2025-06-21T11:47:16.261933855Z level=info msg="Migration successfully executed" id="add migration uid column" duration=13.761143ms grafana | logger=migrator t=2025-06-21T11:47:16.268174645Z level=info msg="Executing migration" id="Update uid column values for migration" grafana | logger=migrator t=2025-06-21T11:47:16.268540479Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=365.194µs grafana | logger=migrator t=2025-06-21T11:47:16.272976141Z level=info msg="Executing migration" id="Add unique index migration_uid" grafana | logger=migrator t=2025-06-21T11:47:16.274636768Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.660577ms grafana | logger=migrator t=2025-06-21T11:47:16.277871719Z level=info msg="Executing migration" id="add migration run uid column" grafana | logger=migrator t=2025-06-21T11:47:16.287261421Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=9.388922ms grafana | logger=migrator t=2025-06-21T11:47:16.293988615Z level=info msg="Executing migration" id="Update uid column values for migration run" grafana | logger=migrator t=2025-06-21T11:47:16.294579621Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=590.556µs grafana | logger=migrator t=2025-06-21T11:47:16.299080095Z level=info msg="Executing migration" id="Add unique index migration_run_uid" grafana | 
logger=migrator t=2025-06-21T11:47:16.301289706Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=2.209541ms grafana | logger=migrator t=2025-06-21T11:47:16.305046832Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-21T11:47:16.329132055Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=24.085073ms grafana | logger=migrator t=2025-06-21T11:47:16.33270064Z level=info msg="Executing migration" id="create cloud_migration_session v2" grafana | logger=migrator t=2025-06-21T11:47:16.334693839Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=1.994059ms grafana | logger=migrator t=2025-06-21T11:47:16.339823219Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" grafana | logger=migrator t=2025-06-21T11:47:16.341229133Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=1.404994ms grafana | logger=migrator t=2025-06-21T11:47:16.344508914Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" grafana | logger=migrator t=2025-06-21T11:47:16.344970439Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=460.485µs grafana | logger=migrator t=2025-06-21T11:47:16.348744916Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" grafana | logger=migrator t=2025-06-21T11:47:16.349689195Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=943.879µs grafana | logger=migrator t=2025-06-21T11:47:16.354010667Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-21T11:47:16.380495013Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=26.483606ms grafana | logger=migrator t=2025-06-21T11:47:16.383500422Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" grafana | logger=migrator t=2025-06-21T11:47:16.384374791Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=873.979µs grafana | logger=migrator t=2025-06-21T11:47:16.388880444Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" grafana | logger=migrator t=2025-06-21T11:47:16.389785324Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=904.62µs grafana | logger=migrator t=2025-06-21T11:47:16.392646921Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" grafana | logger=migrator t=2025-06-21T11:47:16.393114505Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=463.794µs grafana | logger=migrator t=2025-06-21T11:47:16.395922573Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" grafana | logger=migrator t=2025-06-21T11:47:16.396901713Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=978.42µs grafana | logger=migrator t=2025-06-21T11:47:16.401502877Z level=info msg="Executing migration" id="add 
snapshot upload_url column" grafana | logger=migrator t=2025-06-21T11:47:16.412272102Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=10.768385ms grafana | logger=migrator t=2025-06-21T11:47:16.434540167Z level=info msg="Executing migration" id="add snapshot status column" grafana | logger=migrator t=2025-06-21T11:47:16.446876387Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=12.33753ms grafana | logger=migrator t=2025-06-21T11:47:16.45028076Z level=info msg="Executing migration" id="add snapshot local_directory column" grafana | logger=migrator t=2025-06-21T11:47:16.457326968Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=7.045878ms grafana | logger=migrator t=2025-06-21T11:47:16.462956272Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" grafana | logger=migrator t=2025-06-21T11:47:16.471174882Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=8.21514ms grafana | logger=migrator t=2025-06-21T11:47:16.477125029Z level=info msg="Executing migration" id="add snapshot encryption_key column" grafana | logger=migrator t=2025-06-21T11:47:16.483954436Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=6.828667ms grafana | logger=migrator t=2025-06-21T11:47:16.488429449Z level=info msg="Executing migration" id="add snapshot error_string column" grafana | logger=migrator t=2025-06-21T11:47:16.495813361Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=7.382632ms grafana | logger=migrator t=2025-06-21T11:47:16.500550887Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" grafana | logger=migrator t=2025-06-21T11:47:16.501604697Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=1.05369ms grafana | logger=migrator t=2025-06-21T11:47:16.50603966Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" grafana | logger=migrator t=2025-06-21T11:47:16.541079649Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=35.039029ms grafana | logger=migrator t=2025-06-21T11:47:16.544589843Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" grafana | logger=migrator t=2025-06-21T11:47:16.5515149Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=6.923117ms grafana | logger=migrator t=2025-06-21T11:47:16.55767387Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" grafana | logger=migrator t=2025-06-21T11:47:16.567374315Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=9.698084ms grafana | logger=migrator t=2025-06-21T11:47:16.571346832Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" grafana | logger=migrator t=2025-06-21T11:47:16.580874945Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=9.527343ms grafana | logger=migrator t=2025-06-21T11:47:16.585983684Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" grafana | logger=migrator t=2025-06-21T11:47:16.595292095Z level=info msg="Migration successfully executed" 
id="add cloud_migration_resource.error_code column" duration=9.308231ms grafana | logger=migrator t=2025-06-21T11:47:16.63712264Z level=info msg="Executing migration" id="increase resource_uid column length" grafana | logger=migrator t=2025-06-21T11:47:16.637357063Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=235.023µs grafana | logger=migrator t=2025-06-21T11:47:16.644475841Z level=info msg="Executing migration" id="alter kv_store.value to longtext" grafana | logger=migrator t=2025-06-21T11:47:16.644548562Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=74.501µs grafana | logger=migrator t=2025-06-21T11:47:16.648251947Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" grafana | logger=migrator t=2025-06-21T11:47:16.65785252Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=9.599513ms grafana | logger=migrator t=2025-06-21T11:47:16.66393847Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" grafana | logger=migrator t=2025-06-21T11:47:16.674860725Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=10.921795ms grafana | logger=migrator t=2025-06-21T11:47:16.68153238Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" grafana | logger=migrator t=2025-06-21T11:47:16.681949434Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=416.714µs grafana | logger=migrator t=2025-06-21T11:47:16.686210406Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" grafana | logger=migrator t=2025-06-21T11:47:16.686799041Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=588.275µs grafana | logger=migrator t=2025-06-21T11:47:16.690941201Z level=info msg="Executing migration" id="add record column to alert_rule table" grafana | logger=migrator t=2025-06-21T11:47:16.70119392Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=10.251909ms grafana | logger=migrator t=2025-06-21T11:47:16.705740384Z level=info msg="Executing migration" id="add record column to alert_rule_version table" grafana | logger=migrator t=2025-06-21T11:47:16.713806913Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=8.065399ms grafana | logger=migrator t=2025-06-21T11:47:16.717844132Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" grafana | logger=migrator t=2025-06-21T11:47:16.729710817Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=11.901906ms grafana | logger=migrator t=2025-06-21T11:47:16.734666455Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" grafana | logger=migrator t=2025-06-21T11:47:16.742961535Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=8.2889ms grafana | logger=migrator t=2025-06-21T11:47:16.746674132Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and 
alert.notifications.receivers.secrets:read" grafana | logger=migrator t=2025-06-21T11:47:16.747352978Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=678.786µs grafana | logger=migrator t=2025-06-21T11:47:16.75060694Z level=info msg="Executing migration" id="add metadata column to alert_rule table" grafana | logger=migrator t=2025-06-21T11:47:16.760141542Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=9.534232ms grafana | logger=migrator t=2025-06-21T11:47:16.767632795Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" grafana | logger=migrator t=2025-06-21T11:47:16.779455829Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=11.819284ms grafana | logger=migrator t=2025-06-21T11:47:16.783249736Z level=info msg="Executing migration" id="delete orphaned service account permissions" grafana | logger=migrator t=2025-06-21T11:47:16.783493428Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=243.232µs grafana | logger=migrator t=2025-06-21T11:47:16.786745179Z level=info msg="Executing migration" id="adding action set permissions" grafana | logger=migrator t=2025-06-21T11:47:16.787155414Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=413.165µs grafana | logger=migrator t=2025-06-21T11:47:16.790442116Z level=info msg="Executing migration" id="create user_external_session table" grafana | logger=migrator t=2025-06-21T11:47:16.791526126Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.08082ms grafana | logger=migrator t=2025-06-21T11:47:16.79604251Z level=info msg="Executing migration" id="increase name_id column length to 1024" grafana | logger=migrator t=2025-06-21T11:47:16.79606499Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=22.62µs grafana | logger=migrator t=2025-06-21T11:47:16.849312836Z level=info msg="Executing migration" id="increase session_id column length to 1024" grafana | logger=migrator t=2025-06-21T11:47:16.849340136Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=29.34µs grafana | logger=migrator t=2025-06-21T11:47:16.853208744Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" grafana | logger=migrator t=2025-06-21T11:47:16.853810439Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=601.325µs grafana | logger=migrator t=2025-06-21T11:47:16.858595326Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" grafana | logger=migrator t=2025-06-21T11:47:16.870604692Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=12.009196ms grafana | logger=migrator t=2025-06-21T11:47:16.874013376Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" grafana | logger=migrator t=2025-06-21T11:47:16.881364016Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=7.34906ms grafana | logger=migrator t=2025-06-21T11:47:16.884596107Z level=info msg="Executing migration" id="add 
alert_rule_state table" grafana | logger=migrator t=2025-06-21T11:47:16.885720828Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=1.122031ms grafana | logger=migrator t=2025-06-21T11:47:16.892223282Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" grafana | logger=migrator t=2025-06-21T11:47:16.894911038Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=2.685476ms grafana | logger=migrator t=2025-06-21T11:47:16.901081878Z level=info msg="Executing migration" id="add guid column to alert_rule table" grafana | logger=migrator t=2025-06-21T11:47:16.9137299Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=12.651092ms grafana | logger=migrator t=2025-06-21T11:47:16.918374005Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" grafana | logger=migrator t=2025-06-21T11:47:16.929529533Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=11.153118ms grafana | logger=migrator t=2025-06-21T11:47:16.933527172Z level=info msg="Executing migration" id="cleanup alert_rule_version table" grafana | logger=migrator t=2025-06-21T11:47:16.933550282Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 grafana | logger=migrator t=2025-06-21T11:47:16.933844394Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 grafana | logger=migrator t=2025-06-21T11:47:16.933867815Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=340.233µs grafana | logger=migrator t=2025-06-21T11:47:16.936988026Z level=info msg="Executing migration" id="populate rule guid in alert rule table" grafana | logger=migrator t=2025-06-21T11:47:16.93753479Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=546.414µs grafana | logger=migrator t=2025-06-21T11:47:16.941080045Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-21T11:47:16.943131765Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=2.05306ms grafana | logger=migrator t=2025-06-21T11:47:16.946951902Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" grafana | logger=migrator t=2025-06-21T11:47:16.949100582Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=2.15153ms grafana | logger=migrator t=2025-06-21T11:47:16.953693968Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" grafana | logger=migrator t=2025-06-21T11:47:16.954967989Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.273691ms grafana | logger=migrator t=2025-06-21T11:47:16.959917687Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" grafana | logger=migrator t=2025-06-21T11:47:16.96216834Z level=info msg="Migration successfully executed" id="add index in 
alert_rule table on guid columns" duration=2.250373ms grafana | logger=migrator t=2025-06-21T11:47:16.965598822Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" grafana | logger=migrator t=2025-06-21T11:47:16.975978023Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=10.378721ms grafana | logger=migrator t=2025-06-21T11:47:16.979917731Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" grafana | logger=migrator t=2025-06-21T11:47:16.989741296Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=9.822995ms grafana | logger=migrator t=2025-06-21T11:47:16.994909666Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" grafana | logger=migrator t=2025-06-21T11:47:17.007168345Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=12.255579ms grafana | logger=migrator t=2025-06-21T11:47:17.03037404Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" grafana | logger=migrator t=2025-06-21T11:47:17.042419717Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=12.045927ms grafana | logger=migrator t=2025-06-21T11:47:17.04584135Z level=info msg="Executing migration" id="remove the datasources:drilldown action" grafana | logger=migrator t=2025-06-21T11:47:17.046177554Z level=info msg="Removed 0 datasources:drilldown permissions" grafana | logger=migrator t=2025-06-21T11:47:17.046191724Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=355.754µs grafana | logger=migrator t=2025-06-21T11:47:17.051217053Z level=info msg="Executing migration" id="remove title in folder unique index" grafana | logger=migrator t=2025-06-21T11:47:17.052115522Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=898.309µs grafana | logger=migrator t=2025-06-21T11:47:17.057807557Z level=info msg="migrations completed" performed=654 skipped=0 duration=6.644327725s grafana | logger=migrator t=2025-06-21T11:47:17.058909238Z level=info msg="Unlocking database" grafana | logger=sqlstore t=2025-06-21T11:47:17.078851291Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2025-06-21T11:47:17.079110363Z level=info msg="Created default organization" grafana | logger=secrets t=2025-06-21T11:47:17.083927411Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-21T11:47:17.166013647Z level=info msg="Restored cache from database" duration=443.874µs grafana | logger=resource-migrator t=2025-06-21T11:47:17.176006764Z level=info msg="Locking database" grafana | logger=resource-migrator t=2025-06-21T11:47:17.176056214Z level=info msg="Starting DB migrations" grafana | logger=resource-migrator t=2025-06-21T11:47:17.191536205Z level=info msg="Executing migration" id="create resource_migration_log table" grafana | logger=resource-migrator t=2025-06-21T11:47:17.192382433Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=845.808µs grafana | logger=resource-migrator t=2025-06-21T11:47:17.205736843Z level=info msg="Executing migration" 
id="Initialize resource tables" grafana | logger=resource-migrator t=2025-06-21T11:47:17.205764244Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=28.451µs grafana | logger=resource-migrator t=2025-06-21T11:47:17.210719901Z level=info msg="Executing migration" id="drop table resource" grafana | logger=resource-migrator t=2025-06-21T11:47:17.210880733Z level=info msg="Migration successfully executed" id="drop table resource" duration=161.292µs grafana | logger=resource-migrator t=2025-06-21T11:47:17.21572413Z level=info msg="Executing migration" id="create table resource" grafana | logger=resource-migrator t=2025-06-21T11:47:17.217464817Z level=info msg="Migration successfully executed" id="create table resource" duration=1.740167ms grafana | logger=resource-migrator t=2025-06-21T11:47:17.220778769Z level=info msg="Executing migration" id="create table resource, index: 0" grafana | logger=resource-migrator t=2025-06-21T11:47:17.222014701Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.235692ms grafana | logger=resource-migrator t=2025-06-21T11:47:17.225078081Z level=info msg="Executing migration" id="drop table resource_history" grafana | logger=resource-migrator t=2025-06-21T11:47:17.225187402Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=109.971µs grafana | logger=resource-migrator t=2025-06-21T11:47:17.229489134Z level=info msg="Executing migration" id="create table resource_history" grafana | logger=resource-migrator t=2025-06-21T11:47:17.230643124Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.15362ms grafana | logger=resource-migrator t=2025-06-21T11:47:17.235440752Z level=info msg="Executing migration" id="create table resource_history, index: 0" grafana | logger=resource-migrator t=2025-06-21T11:47:17.237492231Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=2.050779ms grafana | logger=resource-migrator t=2025-06-21T11:47:17.241132957Z level=info msg="Executing migration" id="create table resource_history, index: 1" grafana | logger=resource-migrator t=2025-06-21T11:47:17.242994875Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.861527ms grafana | logger=resource-migrator t=2025-06-21T11:47:17.247756541Z level=info msg="Executing migration" id="drop table resource_version" grafana | logger=resource-migrator t=2025-06-21T11:47:17.247927883Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=172.132µs grafana | logger=resource-migrator t=2025-06-21T11:47:17.251996393Z level=info msg="Executing migration" id="create table resource_version" grafana | logger=resource-migrator t=2025-06-21T11:47:17.253439907Z level=info msg="Migration successfully executed" id="create table resource_version" duration=1.443694ms grafana | logger=resource-migrator t=2025-06-21T11:47:17.258824979Z level=info msg="Executing migration" id="create table resource_version, index: 0" grafana | logger=resource-migrator t=2025-06-21T11:47:17.260708337Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.883178ms grafana | logger=resource-migrator t=2025-06-21T11:47:17.265474873Z level=info msg="Executing migration" id="drop table resource_blob" grafana | logger=resource-migrator t=2025-06-21T11:47:17.265569334Z level=info 
msg="Migration successfully executed" id="drop table resource_blob" duration=86.511µs grafana | logger=resource-migrator t=2025-06-21T11:47:17.268795556Z level=info msg="Executing migration" id="create table resource_blob" grafana | logger=resource-migrator t=2025-06-21T11:47:17.269921956Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.12606ms grafana | logger=resource-migrator t=2025-06-21T11:47:17.273249929Z level=info msg="Executing migration" id="create table resource_blob, index: 0" grafana | logger=resource-migrator t=2025-06-21T11:47:17.2744643Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=1.214231ms grafana | logger=resource-migrator t=2025-06-21T11:47:17.280417048Z level=info msg="Executing migration" id="create table resource_blob, index: 1" grafana | logger=resource-migrator t=2025-06-21T11:47:17.281752751Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.333373ms grafana | logger=resource-migrator t=2025-06-21T11:47:17.287273595Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" grafana | logger=resource-migrator t=2025-06-21T11:47:17.300886647Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=13.615652ms grafana | logger=resource-migrator t=2025-06-21T11:47:17.306311789Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" grafana | logger=resource-migrator t=2025-06-21T11:47:17.315415128Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=9.102689ms grafana | logger=resource-migrator t=2025-06-21T11:47:17.318876691Z level=info msg="Executing migration" id="Add index to resource_history for polling" grafana | logger=resource-migrator t=2025-06-21T11:47:17.320201444Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=1.324763ms grafana | logger=resource-migrator t=2025-06-21T11:47:17.32383106Z level=info msg="Executing migration" id="Add index to resource for loading" grafana | logger=resource-migrator t=2025-06-21T11:47:17.325185133Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=1.353573ms grafana | logger=resource-migrator t=2025-06-21T11:47:17.329874288Z level=info msg="Executing migration" id="Add column folder in resource_history" grafana | logger=resource-migrator t=2025-06-21T11:47:17.340123948Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=10.24844ms grafana | logger=resource-migrator t=2025-06-21T11:47:17.343346079Z level=info msg="Executing migration" id="Add column folder in resource" grafana | logger=resource-migrator t=2025-06-21T11:47:17.35265872Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=9.313301ms grafana | logger=resource-migrator t=2025-06-21T11:47:17.355721829Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" grafana | logger=deletion-marker-migrator t=2025-06-21T11:47:17.355742779Z level=info msg="finding any deletion markers" grafana | logger=resource-migrator t=2025-06-21T11:47:17.356179033Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=457.104µs grafana | 
logger=resource-migrator t=2025-06-21T11:47:17.359273963Z level=info msg="Executing migration" id="Add index to resource_history for get trash" grafana | logger=resource-migrator t=2025-06-21T11:47:17.360429835Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=1.155272ms grafana | logger=resource-migrator t=2025-06-21T11:47:17.390111453Z level=info msg="Executing migration" id="Add generation to resource history" grafana | logger=resource-migrator t=2025-06-21T11:47:17.402306341Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=12.195938ms grafana | logger=resource-migrator t=2025-06-21T11:47:17.406111179Z level=info msg="Executing migration" id="Add generation index to resource history" grafana | logger=resource-migrator t=2025-06-21T11:47:17.407455002Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=1.342963ms grafana | logger=resource-migrator t=2025-06-21T11:47:17.412006356Z level=info msg="migrations completed" performed=26 skipped=0 duration=220.541022ms grafana | logger=resource-migrator t=2025-06-21T11:47:17.412733352Z level=info msg="Unlocking database" grafana | t=2025-06-21T11:47:17.413075016Z level=info caller=logger.go:214 time=2025-06-21T11:47:17.413056576Z msg="Using channel notifier" logger=sql-resource-server grafana | logger=plugin.store t=2025-06-21T11:47:17.425033012Z level=info msg="Loading plugins..." grafana | logger=plugins.registration t=2025-06-21T11:47:17.468486414Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugins.initialization t=2025-06-21T11:47:17.468514324Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugin.store t=2025-06-21T11:47:17.468593795Z level=info msg="Plugins loaded" count=53 duration=43.561693ms grafana | logger=query_data t=2025-06-21T11:47:17.473902606Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2025-06-21T11:47:17.478844505Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-21T11:47:17.493959201Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert t=2025-06-21T11:47:17.502691306Z level=info msg="Using simple database alert instance store" grafana | logger=ngalert.state.manager.persist t=2025-06-21T11:47:17.502721797Z level=info msg="Using sync state persister" grafana | logger=infra.usagestats.collector t=2025-06-21T11:47:17.506867767Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=ngalert.state.manager t=2025-06-21T11:47:17.50721869Z level=info msg="Warming state cache for startup" grafana | logger=ngalert.multiorg.alertmanager t=2025-06-21T11:47:17.507405372Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=grafanaStorageLogger t=2025-06-21T11:47:17.507666464Z level=info msg="Storage starting" grafana | logger=http.server t=2025-06-21T11:47:17.509568323Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=plugin.backgroundinstaller t=2025-06-21T11:47:17.516305528Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=plugins.update.checker t=2025-06-21T11:47:17.608438103Z level=info msg="Update check 
succeeded" duration=100.845769ms grafana | logger=grafana.update.checker t=2025-06-21T11:47:17.608466303Z level=info msg="Update check succeeded" duration=100.734878ms grafana | logger=ngalert.state.manager t=2025-06-21T11:47:17.665265265Z level=info msg="State cache has been initialized" states=0 duration=158.045115ms grafana | logger=ngalert.scheduler t=2025-06-21T11:47:17.665337795Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 grafana | logger=ticker t=2025-06-21T11:47:17.665436356Z level=info msg=starting first_tick=2025-06-21T11:47:20Z grafana | logger=provisioning.datasources t=2025-06-21T11:47:17.67098796Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=sqlstore.transactions t=2025-06-21T11:47:17.682659563Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=provisioning.alerting t=2025-06-21T11:47:17.690023934Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2025-06-21T11:47:17.690054956Z level=info msg="finished to provision alerting" grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-21T11:47:17.704086362Z level=info msg="Patterns update finished" duration=100.601387ms grafana | logger=provisioning.dashboard t=2025-06-21T11:47:17.709960169Z level=info msg="starting to provision dashboards" grafana | logger=grafana-apiserver t=2025-06-21T11:47:18.053925047Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-21T11:47:18.056655324Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-21T11:47:18.057192729Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-21T11:47:18.057639523Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-21T11:47:18.058123498Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-21T11:47:18.058890926Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-21T11:47:18.060309309Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-21T11:47:18.062361969Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-21T11:47:18.063129037Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" grafana | logger=app-registry t=2025-06-21T11:47:18.107863531Z level=info msg="app registry initialized" grafana | logger=plugin.installer t=2025-06-21T11:47:18.276354885Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=installer.fs t=2025-06-21T11:47:18.351970259Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.3 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" grafana | logger=plugins.registration t=2025-06-21T11:47:18.386995959Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app grafana | logger=plugin.backgroundinstaller t=2025-06-21T11:47:18.387026789Z level=info msg="Plugin successfully 
installed" pluginId=grafana-metricsdrilldown-app version= duration=870.697271ms grafana | logger=plugin.backgroundinstaller t=2025-06-21T11:47:18.387049049Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=provisioning.dashboard t=2025-06-21T11:47:18.646637119Z level=info msg="finished to provision dashboards" grafana | logger=plugin.installer t=2025-06-21T11:47:18.847232185Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=installer.fs t=2025-06-21T11:47:18.991282753Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.18 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" grafana | logger=plugins.registration t=2025-06-21T11:47:19.015354416Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app grafana | logger=plugin.backgroundinstaller t=2025-06-21T11:47:19.015378456Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=628.324657ms grafana | logger=plugin.backgroundinstaller t=2025-06-21T11:47:19.015397847Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=plugin.installer t=2025-06-21T11:47:19.229254471Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=installer.fs t=2025-06-21T11:47:19.281129024Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" grafana | logger=plugins.registration t=2025-06-21T11:47:19.297074418Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app grafana | logger=plugin.backgroundinstaller t=2025-06-21T11:47:19.297094538Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=281.692351ms grafana | logger=plugin.backgroundinstaller t=2025-06-21T11:47:19.297116588Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=plugin.installer t=2025-06-21T11:47:19.534143938Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=installer.fs t=2025-06-21T11:47:19.594530513Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" grafana | logger=plugins.registration t=2025-06-21T11:47:19.610492348Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app grafana | logger=plugin.backgroundinstaller t=2025-06-21T11:47:19.610532258Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=313.41153ms grafana | logger=infra.usagestats t=2025-06-21T11:48:49.516463351Z level=info msg="Usage stats are ready to report" kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... 
kafka | [2025-06-21 11:47:11,456] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:11,457] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:11,457] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:11,457] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:11,457] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:11,457] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:11,457] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:11,457] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:11,457] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:11,457] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:11,457] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:11,457] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:11,457] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:11,457] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:11,457] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:11,457] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:11,457] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:11,457] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:11,460] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:11,463] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-21 11:47:11,467] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-21 11:47:11,473] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-21 11:47:11,492] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-21 11:47:11,493] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-21 11:47:11,500] INFO Socket connection established, initiating session, client: /172.17.0.5:50654, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-21 11:47:11,532] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x100000293a20000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-21 11:47:11,648] INFO Session: 0x100000293a20000 closed (org.apache.zookeeper.ZooKeeper) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... 
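(Editor's note, illustrative sketch.) Once the broker below logs "[KafkaServer id=1] started", it is reachable on the listeners shown in the KafkaConfig dump: PLAINTEXT://kafka:9092 inside the Docker network and PLAINTEXT_HOST://localhost:29092 from the host. A minimal reachability check, assuming the kafka-python client is installed on the host (it is not part of this job's requirements, so the snippet is illustrative only):

    # Illustrative only: confirm the advertised PLAINTEXT_HOST listener is
    # accepting requests by connecting an admin client and listing topics.
    from kafka import KafkaAdminClient  # kafka-python, an assumed dependency

    def broker_is_up(bootstrap: str = "localhost:29092") -> bool:
        try:
            admin = KafkaAdminClient(bootstrap_servers=bootstrap,
                                     request_timeout_ms=10000)
            admin.list_topics()  # any successful metadata request is enough
            admin.close()
            return True
        except Exception:
            return False

    if __name__ == "__main__":
        print("kafka reachable:", broker_is_up())

Inside the compose network the same check would use kafka:9092, matching advertised.listeners in the configuration dump further down.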
kafka | [2025-06-21 11:47:12,353] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2025-06-21 11:47:12,639] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-21 11:47:12,722] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2025-06-21 11:47:12,723] INFO starting (kafka.server.KafkaServer) kafka | [2025-06-21 11:47:12,723] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2025-06-21 11:47:12,736] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-21 11:47:12,740] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:12,740] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:12,740] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:12,740] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:12,741] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:12,741] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.
jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/
java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:12,741] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:12,741] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:12,741] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:12,741] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:12,741] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:12,741] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:12,741] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:12,742] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:12,742] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:12,742] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:12,742] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:12,742] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:12,744] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@52851b44 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-21 11:47:12,747] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-21 11:47:12,753] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-21 11:47:12,755] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-21 11:47:12,759] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-21 11:47:12,766] INFO Socket connection established, initiating session, client: /172.17.0.5:41310, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-21 11:47:12,777] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x100000293a20001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-21 11:47:12,780] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-21 11:47:13,157] INFO Cluster ID = yLVGLqvHTV69UyVchcNcSA (kafka.server.KafkaServer) kafka | [2025-06-21 11:47:13,162] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2025-06-21 11:47:13,210] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.initial.rebalance.delay.ms = 3000 kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.4-IV0 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | 
log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | 
remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.manager.class.name = null kafka | remote.log.metadata.manager.class.path = null kafka | remote.log.metadata.manager.impl.prefix = null kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = null kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 kafka | sasl.mechanism.controller.protocol = GSSAPI kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope kafka | sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | socket.connection.setup.timeout.max.ms = 30000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.listen.backlog.size = 50 kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka 
| ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null kafka | ssl.keystore.password = null kafka | ssl.keystore.type = JKS kafka | ssl.principal.mapping.rules = DEFAULT kafka | ssl.protocol = TLSv1.3 kafka | ssl.provider = null kafka | ssl.secure.random.implementation = null kafka | ssl.trustmanager.algorithm = PKIX kafka | ssl.truststore.certificates = null kafka | ssl.truststore.location = null kafka | ssl.truststore.password = null kafka | ssl.truststore.type = JKS kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 kafka | transaction.max.timeout.ms = 900000 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 kafka | transaction.state.log.load.buffer.size = 5242880 kafka | transaction.state.log.min.isr = 2 kafka | transaction.state.log.num.partitions = 50 kafka | transaction.state.log.replication.factor = 3 kafka | transaction.state.log.segment.bytes = 104857600 kafka | transactional.id.expiration.ms = 604800000 kafka | unclean.leader.election.enable = false kafka | zookeeper.clientCnxnSocket = null kafka | zookeeper.connect = zookeeper:2181 kafka | zookeeper.connection.timeout.ms = null kafka | zookeeper.max.in.flight.requests = 10 kafka | zookeeper.metadata.migration.enable = false kafka | zookeeper.session.timeout.ms = 18000 kafka | zookeeper.set.acl = false kafka | zookeeper.ssl.cipher.suites = null kafka | zookeeper.ssl.client.enable = false kafka | zookeeper.ssl.crl.enable = false kafka | zookeeper.ssl.enabled.protocols = null kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS kafka | zookeeper.ssl.keystore.location = null kafka | zookeeper.ssl.keystore.password = null kafka | zookeeper.ssl.keystore.type = null kafka | zookeeper.ssl.ocsp.enable = false kafka | zookeeper.ssl.protocol = TLSv1.2 kafka | zookeeper.ssl.truststore.location = null kafka | zookeeper.ssl.truststore.password = null kafka | zookeeper.ssl.truststore.type = null kafka | (kafka.server.KafkaConfig) kafka | [2025-06-21 11:47:13,252] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-21 11:47:13,260] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-21 11:47:13,256] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-21 11:47:13,265] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-21 11:47:13,304] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) kafka | [2025-06-21 11:47:13,307] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager) kafka | [2025-06-21 11:47:13,321] INFO Loaded 0 logs in 17ms. (kafka.log.LogManager) kafka | [2025-06-21 11:47:13,321] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) kafka | [2025-06-21 11:47:13,322] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
(kafka.log.LogManager) kafka | [2025-06-21 11:47:13,334] INFO Starting the log cleaner (kafka.log.LogCleaner) kafka | [2025-06-21 11:47:13,385] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner) kafka | [2025-06-21 11:47:13,408] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) kafka | [2025-06-21 11:47:13,424] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) kafka | [2025-06-21 11:47:13,467] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-21 11:47:13,812] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2025-06-21 11:47:13,815] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) kafka | [2025-06-21 11:47:13,839] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) kafka | [2025-06-21 11:47:13,840] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2025-06-21 11:47:13,840] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) kafka | [2025-06-21 11:47:13,846] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) kafka | [2025-06-21 11:47:13,851] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-21 11:47:13,879] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-21 11:47:13,880] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-21 11:47:13,882] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-21 11:47:13,883] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-21 11:47:13,897] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) kafka | [2025-06-21 11:47:13,924] INFO Creating /brokers/ids/1 (is it secure? 
kafka | [2025-06-21 11:47:13,924] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka | [2025-06-21 11:47:13,973] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1750506433940,1750506433940,1,0,0,72057605104730113,258,0,27
kafka | (kafka.zk.KafkaZkClient)
kafka | [2025-06-21 11:47:13,974] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
kafka | [2025-06-21 11:47:14,106] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
kafka | [2025-06-21 11:47:14,113] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-21 11:47:14,117] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-21 11:47:14,121] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-21 11:47:14,133] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-21 11:47:14,183] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2025-06-21 11:47:14,189] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-21 11:47:14,203] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,208] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,219] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-21 11:47:14,228] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-21 11:47:14,232] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-21 11:47:14,233] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2025-06-21 11:47:14,268] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
kafka | [2025-06-21 11:47:14,268] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,269] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-21 11:47:14,273] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,276] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,278] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,292] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka | [2025-06-21 11:47:14,297] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,303] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,305] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
kafka | [2025-06-21 11:47:14,311] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
kafka | [2025-06-21 11:47:14,319] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-21 11:47:14,319] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-21 11:47:14,319] INFO Kafka startTimeMs: 1750506434312 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-21 11:47:14,320] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
kafka | [2025-06-21 11:47:14,329] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,329] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,329] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,330] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,330] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
kafka | [2025-06-21 11:47:14,333] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,333] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,334] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,334] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
kafka | [2025-06-21 11:47:14,335] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,338] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
kafka | [2025-06-21 11:47:14,345] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
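At this point the broker is fully started: broker 1 has elected itself controller with epoch 1, and the GroupCoordinator and TransactionCoordinator are up. For orientation, this is roughly what a consumer such as the OPA PDP does next; a hedged sketch, again assuming kafka-python, with a hypothetical group id (the real client code lives in the policy-opa-pdp repository, not in this log):

from kafka import KafkaConsumer

# Subscribing with a group id exercises the GroupCoordinator started above;
# committed offsets land in the 50-partition __consumer_offsets topic that
# the broker creates further down in this log.
consumer = KafkaConsumer(
    "policy-pdp-pap",                     # topic name taken from this log
    bootstrap_servers="localhost:29092",  # PLAINTEXT_HOST listener
    group_id="example-opa-pdp-group",     # hypothetical group id
    auto_offset_reset="earliest",
    consumer_timeout_ms=10000,            # stop iterating after 10s of silence
)
for record in consumer:
    print(record.topic, record.partition, record.offset, record.value[:80])
consumer.close()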
kafka | [2025-06-21 11:47:14,346] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-21 11:47:14,350] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-21 11:47:14,350] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-21 11:47:14,350] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-21 11:47:14,353] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-21 11:47:14,355] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-21 11:47:14,355] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,366] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,366] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,367] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,367] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,369] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,373] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
kafka | [2025-06-21 11:47:14,385] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:14,451] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-21 11:47:14,462] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-21 11:47:14,498] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-21 11:47:19,386] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:19,387] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
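The broker then idles until the first clients connect about thirty seconds later, at 11:47:50, and trigger automatic creation of the policy-pdp-pap topic and the 50-partition __consumer_offsets topic, as the records below show. The explicit client-side equivalent of that auto-creation, as a sketch (kafka-python assumed; the CSIT clients rely on the broker's auto-create rather than this call):

from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="localhost:29092")
# One partition, replication factor 1 -- matching the assignment
# "HashMap(0 -> ArrayBuffer(1))" that the controller logs below.
admin.create_topics([
    NewTopic(name="policy-pdp-pap", num_partitions=1, replication_factor=1),
])
admin.close()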
kafka | [2025-06-21 11:47:50,532] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2025-06-21 11:47:50,532] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:50,537] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2025-06-21 11:47:50,543] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:50,577] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(22FWG7enSWm5K8DWjErjAg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(Ewzdg6XXTmWipsWf2hDY-g),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2025-06-21 11:47:50,578] INFO [Controller id=1] New partition creation callback for
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2025-06-21 11:47:50,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | 
[2025-06-21 11:47:50,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,583] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,584] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,584] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,584] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,584] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,584] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,584] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,584] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned 
replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,584] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,584] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,585] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,585] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,585] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,585] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,585] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,585] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,585] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,585] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,586] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,586] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,586] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,586] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,586] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,586] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,586] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,586] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,586] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:47:50,587] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-21 11:47:50,592] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,592] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,592] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,592] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,592] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,593] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,594] TRACE [Controller id=1 epoch=1] Changed 
state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,594] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,595] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,596] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,596] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,597] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,597] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,597] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,597] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,597] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,597] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,597] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,597] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,598] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:47:50,598] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 
11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,788] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,789] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:47:50,791] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-21 11:47:50,792] 
TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-21 11:47:50,792] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 
epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-21 11:47:50,793] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-21 11:47:50,794] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-21 11:47:50,794] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-21 11:47:50,797] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers 
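HashSet(1) for 51 partitions (state.change.logger)

The trace above shows the controller (id=1, epoch=1) electing broker 1 as leader for all 51 newly created partitions: the 50 partitions of the internal __consumer_offsets topic plus policy-pdp-pap-0, each with a single replica (replicas=[1], isr=[1]). A minimal sketch of how one could verify the resulting layout from a test client, assuming the confluent-kafka Python package and a broker reachable at localhost:9092 (both assumptions, not taken from this log):

    from confluent_kafka.admin import AdminClient

    # Broker address is a placeholder; the CSIT environment wires up its own.
    admin = AdminClient({"bootstrap.servers": "localhost:9092"})
    metadata = admin.list_topics(timeout=10)

    for name in ("__consumer_offsets", "policy-pdp-pap"):
        topic = metadata.topics.get(name)
        if topic is None:
            print(f"{name}: not found")
            continue
        # Each PartitionMetadata carries the id of its current leader broker.
        leaders = sorted({p.leader for p in topic.partitions.values()})
        print(f"{name}: {len(topic.partitions)} partitions, leader broker(s) {leaders}")

If the cluster state matches the log above, this would report 50 partitions for __consumer_offsets and 1 for policy-pdp-pap, all led by broker 1.
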
kafka | [2025-06-21 11:47:50,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka |
[2025-06-21 11:47:50,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] 
Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:47:50,799] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-21 11:47:50,805] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2025-06-21 11:47:50,806] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,806] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,806] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,806] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,806] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,806] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,807] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,807] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,807] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,807] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,807] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,807] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,807] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,807] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,807] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,807] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,807] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,807] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,807] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,807] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,808] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,808] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,808] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,808] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,808] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,808] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,808] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,808] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,808] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,808] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 
epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,808] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,808] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,808] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,808] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,809] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,809] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,809] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,809] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,809] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,809] TRACE 
[Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,809] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,809] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,809] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,809] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,809] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,809] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,809] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,809] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,809] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,810] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,810] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:47:50,844] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-21 11:47:50,844] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-21 11:47:50,844] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-21 11:47:50,844] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-21 11:47:50,844] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-21 11:47:50,844] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-21 11:47:50,844] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-21 11:47:50,844] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-21 11:47:50,844] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-21 11:47:50,844] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-21 11:47:50,844] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-21 11:47:50,844] TRACE 
[Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-21 11:47:50,844] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-21 11:47:50,844] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-21 11:47:50,844] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-21 11:47:50,844] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-21 11:47:50,844] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE 
[Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-21 11:47:50,845] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-21 11:47:50,846] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-21 11:47:50,846] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-21 11:47:50,846] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-21 11:47:50,846] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-21 11:47:50,846] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-21 11:47:50,846] TRACE [Broker 
id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-21 11:47:50,846] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-21 11:47:50,846] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-21 11:47:50,846] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-21 11:47:50,846] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-21 11:47:50,846] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-21 11:47:50,846] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-21 11:47:50,846] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-21 11:47:50,847] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) kafka | [2025-06-21 11:47:50,848] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) kafka | [2025-06-21 11:47:50,895] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:50,907] INFO Created log for partition __consumer_offsets-3 in 
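/var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:50,908] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:50,909] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:50,911] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)

The Created log entries echo the per-topic configuration the broker applied: cleanup.policy=compact (the offsets topic retains only the latest committed offset per group/topic/partition key instead of ageing data out by time), compression.type="producer" and segment.bytes=104857600. A minimal sketch that reads the same settings back, again assuming the confluent-kafka Python package and a placeholder broker address:

    from confluent_kafka.admin import AdminClient, ConfigResource

    admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # placeholder address
    resource = ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets")

    # describe_configs() returns one future per requested resource; each
    # future resolves to a dict mapping config name to ConfigEntry.
    for res, future in admin.describe_configs([resource]).items():
        configs = future.result()
        for key in ("cleanup.policy", "compression.type", "segment.bytes"):
            print(key, "=", configs[key].value)

If the broker state matches the log, this prints compact, producer and 104857600. The blocks that follow replay the same load/create/become-leader sequence for each remaining partition.
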
kafka | [2025-06-21 11:47:50,930] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:50,931] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:50,931] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:50,931] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:50,931] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:50,966] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:50,967] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:50,967] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:50,967] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:50,967] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
(state.change.logger) kafka | [2025-06-21 11:47:50,976] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:50,977] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:50,977] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:50,977] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:50,977] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:50,989] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:50,990] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:50,990] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:50,990] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:50,990] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:50,999] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,000] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,000] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,000] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,000] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-21 11:47:51,013] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,015] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,015] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,016] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,016] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,025] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,026] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,026] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,026] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,026] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,034] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,035] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,035] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,035] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,035] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-21 11:47:51,043] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,043] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,043] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,044] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,044] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,050] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,051] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,051] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,051] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,051] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,058] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,059] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,059] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,059] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,059] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-21 11:47:51,067] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,068] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,068] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,068] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,068] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,074] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,075] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,075] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,075] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,075] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,083] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,083] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,084] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,084] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,084] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-21 11:47:51,093] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,094] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,094] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,094] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,095] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,101] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,102] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,102] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,102] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,102] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,109] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,109] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,109] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,109] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,110] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-21 11:47:51,116] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,117] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,117] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,117] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,117] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,149] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,149] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,149] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,149] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,149] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,158] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,159] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,159] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,159] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,159] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-21 11:47:51,167] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,168] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,168] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,168] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,168] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,176] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,176] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,176] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,177] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,177] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,182] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,183] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,183] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,183] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,183] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-21 11:47:51,191] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,191] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,191] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,191] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,191] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,199] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,199] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,199] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,199] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,200] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,207] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,208] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,208] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,208] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,208] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-21 11:47:51,216] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,217] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,217] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,217] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,217] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,224] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,224] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,224] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,225] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,225] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,231] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,232] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,232] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,232] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,232] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-21 11:47:51,240] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,240] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,240] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,241] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,241] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,248] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,248] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,248] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,248] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,248] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,255] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,256] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,256] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,256] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,256] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-21 11:47:51,273] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,274] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,274] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,274] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,274] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,281] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,282] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,282] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,282] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,282] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(22FWG7enSWm5K8DWjErjAg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,288] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,288] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,288] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,288] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,288] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-21 11:47:51,296] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,297] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,297] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,297] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,297] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,305] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,306] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,306] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,306] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,306] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,325] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,326] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,326] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,326] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,327] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-21 11:47:51,333] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,334] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,334] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,334] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,334] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,342] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,342] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,342] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,342] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,342] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,350] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,350] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,350] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,350] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,350] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-21 11:47:51,357] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,358] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,358] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,358] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,358] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,363] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,364] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,364] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,364] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,364] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,374] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,375] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,375] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,375] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,376] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-21 11:47:51,382] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,383] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,383] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,383] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,383] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,392] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,394] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,394] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,394] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,394] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,400] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,401] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,401] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,401] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,401] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-21 11:47:51,408] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,408] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,408] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,408] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,409] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,415] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,416] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,416] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,416] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,416] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:47:51,424] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:47:51,425] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-21 11:47:51,425] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,425] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:47:51,425] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(Ewzdg6XXTmWipsWf2hDY-g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-21 11:47:51,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-21 11:47:51,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-21 11:47:51,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-21 11:47:51,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-21 11:47:51,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-21 11:47:51,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-21 11:47:51,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-21 11:47:51,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-21 11:47:51,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-21 11:47:51,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-21 11:47:51,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-21 11:47:51,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-21 11:47:51,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-21 11:47:51,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-21 11:47:51,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-21 11:47:51,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-21 11:47:51,432] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-21 11:47:51,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-21 11:47:51,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-21 11:47:51,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-21 11:47:51,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-21 11:47:51,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-21 11:47:51,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-21 11:47:51,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-21 11:47:51,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-21 11:47:51,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-21 11:47:51,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-21 11:47:51,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-21 11:47:51,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-21 11:47:51,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-21 11:47:51,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-21 11:47:51,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-21 11:47:51,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 
1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-21 11:47:51,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-21 11:47:51,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-21 11:47:51,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-21 11:47:51,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-21 11:47:51,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-21 11:47:51,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-21 11:47:51,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-21 11:47:51,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-21 11:47:51,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-21 11:47:51,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-21 11:47:51,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-21 11:47:51,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-21 11:47:51,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-21 11:47:51,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-21 11:47:51,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-21 11:47:51,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-43 (state.change.logger) kafka | [2025-06-21 11:47:51,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-21 11:47:51,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-21 11:47:51,445] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,447] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,449] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,449] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,449] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,449] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,449] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,449] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,449] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,449] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,449] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,449] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,449] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,449] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,450] INFO [GroupCoordinator 1]: Elected as the group 
coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group 
metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 
22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,454] INFO [Broker id=1] Finished LeaderAndIsr request in 649ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2025-06-21 11:47:51,459] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=Ewzdg6XXTmWipsWf2hDY-g, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=22FWG7enSWm5K8DWjErjAg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-21 11:47:51,460] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 12 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,460] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,461] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,461] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,461] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,461] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,461] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,463] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,463] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,463] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,463] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,464] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,464] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,464] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,464] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,465] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,465] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,465] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,465] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,465] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,466] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,466] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,466] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,466] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,466] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,466] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,466] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,466] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,466] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,466] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,466] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,466] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata 
request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,466] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,466] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,466] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,466] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,466] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,466] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,466] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,466] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,466] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition 
__consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,466] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,466] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) 
for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,467] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,467] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,468] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,468] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-21 11:47:51,468] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,468] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,468] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,468] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,468] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-21 11:47:51,469] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,469] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,469] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,469] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,469] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,469] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,470] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,470] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,470] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,470] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,470] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,471] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,471] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,471] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-21 11:47:51,539] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5 in Empty state. Created a new member id consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3-0459b753-4a70-48d0-823d-9daace652ae8 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:51,557] INFO [GroupCoordinator 1]: Preparing to rebalance group c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5 in state PreparingRebalance with old generation 0 (__consumer_offsets-4) (reason: Adding new member consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3-0459b753-4a70-48d0-823d-9daace652ae8 with group instance id None; client reason: need to re-join with the given member-id: consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3-0459b753-4a70-48d0-823d-9daace652ae8) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:52,328] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-2e426b59-265a-42cc-ac6a-6f6cfb2677f1 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:52,331] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-2e426b59-265a-42cc-ac6a-6f6cfb2677f1 with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-2e426b59-265a-42cc-ac6a-6f6cfb2677f1) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:54,571] INFO [GroupCoordinator 1]: Stabilized group c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5 generation 1 (__consumer_offsets-4) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:54,600] INFO [GroupCoordinator 1]: Assignment received from leader consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3-0459b753-4a70-48d0-823d-9daace652ae8 for group c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:55,332] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:47:55,338] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-2e426b59-265a-42cc-ac6a-6f6cfb2677f1 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:48:35,235] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group opa-pdp in Empty state. Created a new member id rdkafka-6bb66bc5-f4a0-4e3d-af56-0576b19fcc3f and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:48:35,238] INFO [GroupCoordinator 1]: Preparing to rebalance group opa-pdp in state PreparingRebalance with old generation 0 (__consumer_offsets-25) (reason: Adding new member rdkafka-6bb66bc5-f4a0-4e3d-af56-0576b19fcc3f with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:48:38,239] INFO [GroupCoordinator 1]: Stabilized group opa-pdp generation 1 (__consumer_offsets-25) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:48:38,242] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-6bb66bc5-f4a0-4e3d-af56-0576b19fcc3f for group opa-pdp for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:49:45,961] INFO Creating topic policy-notification with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-21 11:49:45,974] INFO [Controller id=1] New topics: [Set(policy-notification)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-notification,Some(GtbEtQ7TRk2lqI9-d3wp5g),Map(policy-notification-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-21 11:49:45,974] INFO [Controller id=1] New partition creation callback for policy-notification-0 (kafka.controller.KafkaController) kafka | [2025-06-21 11:49:45,974] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-21 11:49:45,974] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-21 11:49:45,975] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-21 11:49:45,975] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-21 11:49:45,985] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-21 11:49:45,985] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-notification-0 (state.change.logger) kafka | [2025-06-21 11:49:45,985] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-21 11:49:45,985] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) kafka | [2025-06-21 11:49:45,985] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-21 11:49:45,985] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-21 11:49:45,990] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 1 partitions (state.change.logger) kafka | [2025-06-21 11:49:45,990] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-21 11:49:45,991] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition 
policy-notification-0 (state.change.logger) kafka | [2025-06-21 11:49:45,992] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-notification-0) (kafka.server.ReplicaFetcherManager) kafka | [2025-06-21 11:49:45,992] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) kafka | [2025-06-21 11:49:45,995] INFO [LogLoader partition=policy-notification-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-21 11:49:45,996] INFO Created log for partition policy-notification-0 in /var/lib/kafka/data/policy-notification-0 with properties {} (kafka.log.LogManager) kafka | [2025-06-21 11:49:45,997] INFO [Partition policy-notification-0 broker=1] No checkpointed highwatermark is found for partition policy-notification-0 (kafka.cluster.Partition) kafka | [2025-06-21 11:49:45,997] INFO [Partition policy-notification-0 broker=1] Log loaded for partition policy-notification-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-21 11:49:45,998] INFO [Broker id=1] Leader policy-notification-0 with topic id Some(GtbEtQ7TRk2lqI9-d3wp5g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-21 11:49:46,003] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition policy-notification-0 (state.change.logger) kafka | [2025-06-21 11:49:46,004] INFO [Broker id=1] Finished LeaderAndIsr request in 14ms correlationId 3 from controller 1 for 1 partitions (state.change.logger) kafka | [2025-06-21 11:49:46,004] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=GtbEtQ7TRk2lqI9-d3wp5g, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-21 11:49:46,006] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-notification-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-21 11:49:46,006] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-21 11:49:46,007] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-21 11:51:24,082] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-b9e97c7d-cc07-4237-90f9-6991a5da9945 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:51:24,084] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member rdkafka-b9e97c7d-cc07-4237-90f9-6991a5da9945 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:51:27,086] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:51:27,089] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-b9e97c7d-cc07-4237-90f9-6991a5da9945 for group testgrp for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:51:27,205] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 1 (__consumer_offsets-3) (reason: Removing member rdkafka-b9e97c7d-cc07-4237-90f9-6991a5da9945 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:51:27,206] INFO [GroupCoordinator 1]: Group testgrp with generation 2 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:51:27,207] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-b9e97c7d-cc07-4237-90f9-6991a5da9945, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.7, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:51:49,719] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-c121633a-8a2b-40ab-a849-f8f7d776a751 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:51:49,721] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 2 (__consumer_offsets-3) (reason: Adding new member rdkafka-c121633a-8a2b-40ab-a849-f8f7d776a751 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:51:52,723] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 3 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:51:52,726] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-c121633a-8a2b-40ab-a849-f8f7d776a751 for group testgrp for generation 3. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:51:52,733] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 3 (__consumer_offsets-3) (reason: Removing member rdkafka-c121633a-8a2b-40ab-a849-f8f7d776a751 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:51:52,733] INFO [GroupCoordinator 1]: Group testgrp with generation 4 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:51:52,734] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-c121633a-8a2b-40ab-a849-f8f7d776a751, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.7, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:52:15,193] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-e69527d8-6a62-4c93-8dcc-3305b20cbd02 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:52:15,194] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 4 (__consumer_offsets-3) (reason: Adding new member rdkafka-e69527d8-6a62-4c93-8dcc-3305b20cbd02 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:52:18,196] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 5 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:52:18,199] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-e69527d8-6a62-4c93-8dcc-3305b20cbd02 for group testgrp for generation 5. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:52:18,205] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 5 (__consumer_offsets-3) (reason: Removing member rdkafka-e69527d8-6a62-4c93-8dcc-3305b20cbd02 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:52:18,206] INFO [GroupCoordinator 1]: Group testgrp with generation 6 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:52:18,206] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-e69527d8-6a62-4c93-8dcc-3305b20cbd02, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.7, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-21 11:52:19,390] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2025-06-21 11:52:19,390] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2025-06-21 11:52:19,395] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 HashMap() (kafka.controller.KafkaController) kafka | [2025-06-21 11:52:19,396] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController) policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.7:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . ____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | policy-api | :: Spring Boot :: (v3.4.6) policy-api | policy-api | [2025-06-21T11:47:29.293+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final policy-api | [2025-06-21T11:47:29.358+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 39 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2025-06-21T11:47:29.359+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default" policy-api | [2025-06-21T11:47:30.785+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2025-06-21T11:47:30.941+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 145 ms. Found 6 JPA repository interfaces. 
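The policy-api container above waits for the policy-db-migrator port before bringing up Spring Boot against the Postgres database, and the startup lines that follow report Tomcat listening on port 6969 with the '/policy/api/v1' context path. Once that point is reached the component can be probed directly; a minimal sketch, assuming the standard healthcheck sub-path and basic-auth credentials provisioned in the CSIT environment (neither the path suffix nor the credentials are printed in this log):

    # hypothetical manual probe; API_USER/API_PASS stand in for the credentials
    # configured in the CSIT compose files, they are not values from this log
    curl -s -u "$API_USER:$API_PASS" \
      http://policy-api:6969/policy/api/v1/healthcheck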
policy-api | [2025-06-21T11:47:31.554+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) policy-api | [2025-06-21T11:47:31.568+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2025-06-21T11:47:31.570+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2025-06-21T11:47:31.570+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] policy-api | [2025-06-21T11:47:31.613+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2025-06-21T11:47:31.613+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2195 ms policy-api | [2025-06-21T11:47:31.924+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2025-06-21T11:47:32.012+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final policy-api | [2025-06-21T11:47:32.060+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2025-06-21T11:47:32.410+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2025-06-21T11:47:32.443+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2025-06-21T11:47:32.638+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@6ba226cd policy-api | [2025-06-21T11:47:32.639+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-api | [2025-06-21T11:47:32.715+00:00|INFO|pooling|main] HHH10001005: Database info: policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] policy-api | Database driver: undefined/unknown policy-api | Database version: 16.4 policy-api | Autocommit mode: undefined/unknown policy-api | Isolation level: undefined/unknown policy-api | Minimum pool size: undefined/unknown policy-api | Maximum pool size: undefined/unknown policy-api | [2025-06-21T11:47:34.645+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2025-06-21T11:47:34.648+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2025-06-21T11:47:35.296+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2025-06-21T11:47:36.164+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2025-06-21T11:47:37.274+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2025-06-21T11:47:37.320+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager policy-api | [2025-06-21T11:47:37.988+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' policy-api | [2025-06-21T11:47:38.118+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2025-06-21T11:47:38.137+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1' policy-api | [2025-06-21T11:47:38.156+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 9.511 seconds (process running for 10.101) policy-api | [2025-06-21T11:47:39.917+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-api | [2025-06-21T11:47:39.918+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-api | [2025-06-21T11:47:39.919+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms policy-api | [2025-06-21T11:51:02.901+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-6] ***** OrderedServiceImpl implementers: policy-api | [] policy-api | [2025-06-21T11:52:18.508+00:00|WARN|CommonRestController|http-nio-6969-exec-1] "incoming fragment" INVALID, item has status INVALID policy-api | item "entity" value "abac:1.0.7" INVALID, does not equal existing entity policy-api | policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot policy-csit | Run Robot test policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates policy-csit | -v POLICY_API_IP:policy-api:6969 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 policy-csit | -v APEX_IP:policy-apex-pdp:6969 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 policy-csit | -v KAFKA_IP:kafka:9092 policy-csit | -v PROMETHEUS_IP:prometheus:9090 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 policy-csit | -v TEMP_FOLDER:/tmp/distribution policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 policy-csit | -v TEST_ENV:docker policy-csit | -v JAEGER_IP:jaeger:16686 policy-csit | Starting Robot test suites ... 
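The ROBOT_VARIABLES printed above are ordinary Robot Framework '-v NAME:value' overrides, so the two suites can in principle be replayed by hand against a running stack. A minimal sketch, assuming the robot CLI is installed and the suite files are in the current directory (the wrapper script the CSIT job actually invokes is not shown in this log):

    # hypothetical manual re-run using a subset of the variables listed above
    robot -v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies \
          -v POLICY_OPA_IP:policy-opa-pdp:8282 \
          -v POLICY_API_IP:policy-api:6969 \
          -v POLICY_PAP_IP:policy-pap:6969 \
          -v KAFKA_IP:kafka:9092 \
          -v PROMETHEUS_IP:prometheus:9090 \
          --outputdir /tmp/results \
          opa-pdp-test.robot opa-pdp-slas.robot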
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Healthcheck :: Verify OPA PDP health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateDataBeforePolicyDeployment | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesZonePolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesVehiclePolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesAbacPolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test | PASS |
policy-csit | 5 tests, 5 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas | PASS |
policy-csit | 5 tests, 5 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas | PASS |
policy-csit | 10 tests, 10 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-db-migrator | Waiting for postgres port 5432...
policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused
policy-db-migrator | Connection to postgres (172.17.0.2) 5432 port [tcp/postgresql] succeeded!
policy-db-migrator | Initializing policyadmin...
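The 'nc: connect ... Connection refused' lines above are the usual busy-wait gate a container runs before touching its database. A minimal sketch of such a gate, assuming the netcat behaviour suggested by the messages in this log (the actual policy-db-migrator entrypoint script is not part of this output):

    # hypothetical wait-for-port loop; host and port are taken from the log above
    until nc -zv postgres 5432; do
      echo "postgres not reachable yet, retrying..."
      sleep 2
    done
    echo "postgres is up, starting schema initialization"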
policy-db-migrator | 321 blocks policy-db-migrator | Preparing upgrade release version: 0800 policy-db-migrator | Preparing upgrade release version: 0900 policy-db-migrator | Preparing upgrade release version: 1000 policy-db-migrator | Preparing upgrade release version: 1100 policy-db-migrator | Preparing upgrade release version: 1200 policy-db-migrator | Preparing upgrade release version: 1300 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------+--------- policy-db-migrator | policyadmin | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | 
operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0210-jpatoscadatatype_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 
policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0470-pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 
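Each '> upgrade NNNN-*.sql' block above follows the same shape: the numbered SQL file is applied, the resulting DDL status (CREATE TABLE, ALTER TABLE, ...) is echoed, a changelog row is recorded (the 'INSERT 0 1'), and the return code is printed as 'rc=0'. A rough sketch of that apply-and-record loop, assuming psql access as policy_user and the policyadmin_schema_changelog columns dumped near the end of this log; the real migrator script is not shown here, and the version and tag values below are illustrative placeholders:

    # hypothetical per-script loop mirroring the pattern printed above
    for f in 0*-*.sql; do
      psql -U policy_user -d policyadmin -f "$f"
      rc=$?
      psql -U policy_user -d policyadmin -c \
        "INSERT INTO policyadmin_schema_changelog
           (script, operation, from_version, to_version, tag, success, attime)
         VALUES ('$f', 'upgrade', '0', '0800', 'upgrade-tag', $(( rc == 0 ? 1 : 0 )), now());"
      echo "rc=$rc"
    done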
policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0640-toscanodetypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0770-toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0780-toscarequirements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0810-toscatopologytemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 
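Once all numbered scripts have run, the migrator re-lists the databases and dumps the policyadmin_schema_changelog table, as it does near the end of this log. The same verification can be reproduced by hand; a minimal sketch, assuming psql access as policy_user:

    # hypothetical post-upgrade check; mirrors the listing the migrator prints further below
    psql -U policy_user -l
    psql -U policy_user -d policyadmin -c \
      "SELECT script, operation, from_version, to_version, tag, success, attime
         FROM policyadmin_schema_changelog
        ORDER BY id;"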
policy-db-migrator | policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdp.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0210-sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-sequence.sql policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 
policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-upgrade.sql policy-db-migrator | msg policy-db-migrator | --------------------------- policy-db-migrator | upgrade to 1100 completed policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | DROP INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdpstatistics.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-statistics_sequence.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | policyadmin: OK: upgrade (1300) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | name | version policy-db-migrator | -------------+--------- policy-db-migrator | policyadmin | 1300 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:15.53021 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:15.583266 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:15.629608 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:15.681909 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:15.734017 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:15.792323 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:15.842837 policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:15.917077 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:15.969889 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 
2106251147150800u | 1 | 2025-06-21 11:47:16.021926 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:16.075123 policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:16.130734 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:16.178725 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:16.226247 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:16.28927 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:16.343757 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:16.399237 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:16.452667 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:16.501461 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:16.552807 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:16.602923 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:16.685649 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:16.732327 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:16.782042 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:16.865153 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:16.919006 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:16.974797 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:17.046854 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:17.093194 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:17.145535 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:17.195103 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:17.253025 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:17.307115 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:17.35863 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:17.419063 policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 
| 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:17.475281 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:17.529215 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:17.599033 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:17.65171 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:17.711417 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:17.764718 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:17.818426 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:17.870324 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:17.922817 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:17.99033 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:18.046436 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:18.104201 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:18.188718 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:18.242654 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:18.297898 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:18.394878 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:18.442909 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:18.491304 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:18.549427 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:18.602673 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:18.663505 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:18.719943 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:18.769551 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:18.820679 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:18.913404 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:18.973487 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:19.026666 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 
2025-06-21 11:47:19.123507 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:19.191501 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:19.254607 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:19.314975 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:19.368967 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:19.425676 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:19.515218 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:19.570995 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:19.630359 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:19.694191 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:19.753972 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:19.81006 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:19.868434 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:19.921615 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:19.969431 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:20.021723 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:20.09182 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:20.145478 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:20.202237 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:20.31556 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:20.371065 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:20.427231 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:20.508503 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:20.563637 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:20.615149 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 
11:47:20.697534 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:20.752449 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:20.813802 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:20.885465 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:20.933035 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:20.989058 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:21.047631 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:21.098467 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 2106251147150800u | 1 | 2025-06-21 11:47:21.148942 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 2106251147150900u | 1 | 2025-06-21 11:47:21.195897 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 2106251147150900u | 1 | 2025-06-21 11:47:21.275693 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 2106251147150900u | 1 | 2025-06-21 11:47:21.329818 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 2106251147150900u | 1 | 2025-06-21 11:47:21.381389 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 2106251147150900u | 1 | 2025-06-21 11:47:21.464839 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 2106251147150900u | 1 | 2025-06-21 11:47:21.520465 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 2106251147150900u | 1 | 2025-06-21 11:47:21.574746 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 2106251147150900u | 1 | 2025-06-21 11:47:21.649566 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 2106251147150900u | 1 | 2025-06-21 11:47:21.700814 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 2106251147150900u | 1 | 2025-06-21 11:47:21.757013 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 2106251147150900u | 1 | 2025-06-21 11:47:21.83325 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 2106251147150900u | 1 | 2025-06-21 11:47:21.886328 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 2106251147150900u | 1 | 2025-06-21 11:47:21.938828 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 2106251147151000u | 1 | 2025-06-21 11:47:21.994156 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 2106251147151000u | 1 | 2025-06-21 11:47:22.047199 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 2106251147151000u | 1 | 2025-06-21 11:47:22.096136 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 2106251147151000u | 1 | 2025-06-21 11:47:22.172039 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 2106251147151000u 
| 1 | 2025-06-21 11:47:22.223391 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 2106251147151000u | 1 | 2025-06-21 11:47:22.280095 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 2106251147151000u | 1 | 2025-06-21 11:47:22.357985 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 2106251147151000u | 1 | 2025-06-21 11:47:22.414349 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 2106251147151000u | 1 | 2025-06-21 11:47:22.463146 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 2106251147151100u | 1 | 2025-06-21 11:47:22.509582 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 2106251147151200u | 1 | 2025-06-21 11:47:22.587556 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 2106251147151200u | 1 | 2025-06-21 11:47:22.638823 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 2106251147151200u | 1 | 2025-06-21 11:47:22.693999 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 2106251147151200u | 1 | 2025-06-21 11:47:22.763424 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 2106251147151300u | 1 | 2025-06-21 11:47:22.816382 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 2106251147151300u | 1 | 2025-06-21 11:47:22.868522 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 2106251147151300u | 1 | 2025-06-21 11:47:22.914893 policy-db-migrator | (126 rows) policy-db-migrator | policy-db-migrator | policyadmin: OK @ 1300 policy-db-migrator | Initializing clampacm... policy-db-migrator | 97 blocks policy-db-migrator | Preparing upgrade release version: 1400 policy-db-migrator | Preparing upgrade release version: 1500 policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Preparing upgrade release version: 1601 policy-db-migrator | Preparing upgrade release version: 1700 policy-db-migrator | Preparing upgrade release version: 1701 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | 
policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | clampacm: upgrade available: 0 -> 1701 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1701 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0200-automationcompositiondefinition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-nodetemplatestate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participantsupportedelements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-participantreplica.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-participant.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participant_replica_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | 
policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-message.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-messagejob.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-participantreplica.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 
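[Editor's note] After the last clampacm script, the migrator re-checks its bookkeeping; the "name | version" and clampacm_schema_changelog listings that follow are the output of exactly this kind of query. A minimal Go sketch of that verification step (the connection string is hypothetical; the table names appear verbatim in this log):

    package main

    import (
    	"database/sql"
    	"fmt"
    	"log"

    	_ "github.com/lib/pq" // PostgreSQL driver
    )

    func main() {
    	// Hypothetical DSN; not taken from this log.
    	db, err := sql.Open("postgres",
    		"host=postgres user=policy_user dbname=clampacm sslmode=disable")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer db.Close()

    	// Current schema version -- should print: clampacm | 1701
    	var name, version string
    	if err := db.QueryRow(
    		`SELECT name, version FROM schema_versions`).Scan(&name, &version); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s | %s\n", name, version)

    	// Per-script history, as listed in the log output below.
    	rows, err := db.Query(
    		`SELECT id, script, to_version FROM clampacm_schema_changelog ORDER BY id`)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer rows.Close()
    	for rows.Next() {
    		var id int
    		var script, to string
    		if err := rows.Scan(&id, &script, &to); err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("%3d | %s | %s\n", id, script, to)
    	}
    }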
policy-db-migrator | clampacm: OK: upgrade (1701) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 1701 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 2106251147231400u | 1 | 2025-06-21 11:47:23.618058 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 2106251147231400u | 1 | 2025-06-21 11:47:23.672095 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 2106251147231400u | 1 | 2025-06-21 11:47:23.757041 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 2106251147231400u | 1 | 2025-06-21 11:47:23.816141 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 2106251147231400u | 1 | 2025-06-21 11:47:23.87408 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 2106251147231400u | 1 | 2025-06-21 11:47:23.944664 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 2106251147231400u | 1 | 2025-06-21 11:47:24.001517 policy-db-migrator | 8 | 
0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 2106251147231400u | 1 | 2025-06-21 11:47:24.058789 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 2106251147231400u | 1 | 2025-06-21 11:47:24.128498 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 2106251147231400u | 1 | 2025-06-21 11:47:24.185587 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 2106251147231400u | 1 | 2025-06-21 11:47:24.241805 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 2106251147231400u | 1 | 2025-06-21 11:47:24.295014 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 2106251147231400u | 1 | 2025-06-21 11:47:24.351118 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 2106251147231500u | 1 | 2025-06-21 11:47:24.406975 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 2106251147231500u | 1 | 2025-06-21 11:47:24.46229 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 2106251147231500u | 1 | 2025-06-21 11:47:24.521933 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 2106251147231500u | 1 | 2025-06-21 11:47:24.573373 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 2106251147231500u | 1 | 2025-06-21 11:47:24.654805 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 2106251147231500u | 1 | 2025-06-21 11:47:24.709826 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 2106251147231500u | 1 | 2025-06-21 11:47:24.758469 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 2106251147231500u | 1 | 2025-06-21 11:47:24.803646 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 2106251147231600u | 1 | 2025-06-21 11:47:24.856962 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 2106251147231600u | 1 | 2025-06-21 11:47:24.906445 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 2106251147231601u | 1 | 2025-06-21 11:47:24.953553 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 2106251147231601u | 1 | 2025-06-21 11:47:25.004594 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 2106251147231700u | 1 | 2025-06-21 11:47:25.062479 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 2106251147231700u | 1 | 2025-06-21 11:47:25.116284 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 2106251147231700u | 1 | 2025-06-21 11:47:25.167942 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 2106251147231701u | 1 | 2025-06-21 11:47:25.237545 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 2106251147231701u | 1 | 2025-06-21 11:47:25.292456 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 2106251147231701u | 1 | 2025-06-21 11:47:25.345639 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 2106251147231701u | 1 | 2025-06-21 11:47:25.442166 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 2106251147231701u | 1 | 2025-06-21 11:47:25.49506 policy-db-migrator | 34 | 
0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 2106251147231701u | 1 | 2025-06-21 11:47:25.549211 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 2106251147231701u | 1 | 2025-06-21 11:47:25.606893 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 2106251147231701u | 1 | 2025-06-21 11:47:25.666391 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 2106251147231701u | 1 | 2025-06-21 11:47:25.717091 policy-db-migrator | (37 rows) policy-db-migrator | policy-db-migrator | clampacm: OK @ 1701 policy-db-migrator | Initializing pooling... policy-db-migrator | 4 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | pooling: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-distributed.locking.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | pooling: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | 
| | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 2106251147261600u | 1 | 2025-06-21 11:47:26.364384 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | pooling: OK @ 1600 policy-db-migrator | Initializing operationshistory... policy-db-migrator | 6 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE 
policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-operationshistory.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | operationshistory: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | 
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator | clampacm          | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user             +
policy-db-migrator |                   |             |      |      |            |            | | | policy_user=CTc/policy_user
policy-db-migrator | migration         | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user             +
policy-db-migrator |                   |             |      |      |            |            | | | policy_user=CTc/policy_user
policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user             +
policy-db-migrator |                   |             |      |      |            |            | | | policy_user=CTc/policy_user
policy-db-migrator | policyadmin       | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user             +
policy-db-migrator |                   |             |      |      |            |            | | | policy_user=CTc/policy_user
policy-db-migrator | policyclamp       | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user             +
policy-db-migrator |                   |             |      |      |            |            | | | policy_user=CTc/policy_user
policy-db-migrator | pooling           | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user             +
policy-db-migrator |                   |             |      |      |            |            | | | policy_user=CTc/policy_user
policy-db-migrator | postgres          | postgres    | UTF8 | libc | en_US.utf8 | en_US.utf8 | | |
policy-db-migrator | template0         | postgres    | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres                 +
policy-db-migrator |                   |             |      |      |            |            | | | postgres=CTc/postgres
policy-db-migrator | template1         | postgres    | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres                 +
policy-db-migrator |                   |             |      |      |            |            | | | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator | 
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping
policy-db-migrator |        name        | version
policy-db-migrator | -------------------+---------
policy-db-migrator |  operationshistory | 1600
policy-db-migrator | (1 row)
policy-db-migrator | 
policy-db-migrator |  id |             script             | operation | from_version | to_version |        tag        | success |           attime
policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+----------------------------
policy-db-migrator |   1 | 0100-ophistory_id_sequence.sql | upgrade   | 1500         | 1600       | 2106251147261600u | 1       | 2025-06-21 11:47:27.052292
policy-db-migrator |   2 | 0110-operationshistory.sql     | upgrade   | 1500         | 1600       | 2106251147261600u | 1       | 2025-06-21 11:47:27.140688
policy-db-migrator | (2 rows)
policy-db-migrator | 
policy-db-migrator | operationshistory: OK @ 1600
policy-opa-pdp | Waiting for kafka port 9092...
policy-opa-pdp | Connection to kafka (172.17.0.5) 9092 port [tcp/*] succeeded!
policy-opa-pdp | Waiting for pap port 6969...
policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | [same message repeated while waiting for pap to accept connections] policy-opa-pdp | Connection to pap (172.17.0.9) 6969 port [tcp/*] succeeded!
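[editor's note] The container entrypoint blocks on kafka:9092 and pap:6969 before starting the PDP, which is what the nc retry loop above records. A minimal sketch of such a wait-for-port loop (host names and ports taken from the log; the retry interval is an assumption):

    import socket, time

    def wait_for_port(host: str, port: int, interval: float = 2.0) -> None:
        # Keep retrying until a TCP connection succeeds, mirroring the nc loop in the log.
        while True:
            try:
                with socket.create_connection((host, port), timeout=2.0):
                    print(f"Connection to {host} {port} port succeeded!")
                    return
            except OSError:
                print(f"connect to {host} port {port} failed: Connection refused")
                time.sleep(interval)

    wait_for_port("kafka", 9092)
    wait_for_port("pap", 6969)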
policy-opa-pdp | time="2025-06-21T11:48:30Z" level=debug msg="###################################### " policy-opa-pdp | time="2025-06-21T11:48:30Z" level=debug msg="OPA-PDP: Starting initialisation " policy-opa-pdp | time="2025-06-21T11:48:30Z" level=debug msg="###################################### " policy-opa-pdp | time="2025-06-21T11:48:30Z" level=warning msg="KAFKA_URL not defined, using default value" policy-opa-pdp | time="2025-06-21T11:48:30Z" level=warning msg="PAP_TOPIC not defined, using default value" policy-opa-pdp | time="2025-06-21T11:48:30Z" level=warning msg="PATCH_TOPIC not defined, using default value" policy-opa-pdp | time="2025-06-21T11:48:30Z" level=warning msg="PATCH_GROUPID not defined, using default value" policy-opa-pdp | time="2025-06-21T11:48:30Z" level=warning msg="API_USER not defined, using default value" policy-opa-pdp | time="2025-06-21T11:48:30Z" level=warning msg="API_PASSWORD not defined, using default value" policy-opa-pdp | time="2025-06-21T11:48:30Z" level=warning msg="UseSASLForKAFKA not defined, using default value" policy-opa-pdp | decodedConfig org.apache.kafka.common.security.scram.ScramLoginModule required username="policy-opa-pdp-ku" password="" policy-opa-pdp | time="2025-06-21T11:48:30Z" level=debug msg="Username: " policy-opa-pdp | time="2025-06-21T11:48:30Z" level=debug msg="Password: " policy-opa-pdp | time="2025-06-21T11:48:30Z" level=warning msg="USE_KAFKA_FOR_PATCH not defined, using default value: false" policy-opa-pdp | time="2025-06-21T11:48:30Z" level=debug msg="Configuration module: environment initialised" policy-opa-pdp | DEBU[2025-06-21T11:48:30.2027+00:00] logger initialised Filepath = /var/logs/logs.log, Logsize(MB) = 10, Backups = 3, Loglevel = debug policy-opa-pdp | DEBU[2025-06-21T11:48:30.2035+00:00] Name: opa-384c17a2-8037-4a68-95a5-eea37a9fe744 policy-opa-pdp | DEBU[2025-06-21T11:48:30.2066+00:00] Starting OPA PDP Service policy-opa-pdp | INFO[2025-06-21T11:48:35.2109+00:00] HTTP server started policy-opa-pdp | DEBU[2025-06-21T11:48:35.2119+00:00] Create an instance of OPA Object policy-opa-pdp | DEBU[2025-06-21T11:48:35.2120+00:00] Configure an instance of OPA Object policy-opa-pdp | DEBU[2025-06-21T11:48:35.2130+00:00] Topic start :::: policy-pdp-pap policy-opa-pdp | DEBU[2025-06-21T11:48:35.2130+00:00] Creating Kafka Consumer singleton instance policy-opa-pdp | &map[auto.offset.reset:latest bootstrap.servers:kafka:9092 group.id:opa-pdp]DEBU[2025-06-21T11:48:35.2164+00:00] Topic Subscribed: policy-pdp-pap policy-opa-pdp | DEBU[2025-06-21T11:48:35.2165+00:00] Created SIngleton consumer instance policy-opa-pdp | DEBU[2025-06-21T11:48:35.2286+00:00] Starting PDP Message Listener..... policy-opa-pdp | DEBU[2025-06-21T11:48:45.2308+00:00] New Ticker started with interval 60000 policy-opa-pdp | DEBU[2025-06-21T11:48:55.2399+00:00] After registration successful delay policy-opa-pdp | 2025/06/21 11:49:45 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-21T11:49:45.2546+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"f6afdd01-58e9-4e6c-adbe-6e11e98c335c","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750506585254","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-21T11:49:45.2547+00:00] Sending Heartbeat ... 
policy-opa-pdp | DEBU[2025-06-21T11:49:45.2825+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"f6afdd01-58e9-4e6c-adbe-6e11e98c335c","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750506585254","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-21T11:49:45.2827+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-21T11:49:45.2827+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-21T11:49:45.8961+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"c9bc964d-b03e-4503-a19a-a42aaedded09","timestampMs":1750506585826,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-21T11:49:45.8962+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-21T11:49:45.8964+00:00] PDP_UPDATE Message received: 
{"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"c9bc964d-b03e-4503-a19a-a42aaedded09","timestampMs":1750506585826,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-21T11:49:45.8964+00:00] Policy Is Allowed: slice.capacity.check policy-opa-pdp | DEBU[2025-06-21T11:49:45.8964+00:00] Validating properties data for policy: slice.capacity.check policy-opa-pdp | DEBU[2025-06-21T11:49:45.8965+00:00] Validating properties policy for policy: slice.capacity.check policy-opa-pdp | INFO[2025-06-21T11:49:45.8965+00:00] Validation successful for policy: slice.capacity.check policy-opa-pdp | INFO[2025-06-21T11:49:45.8967+00:00] Directory created: /opt/policies/slice/capacity/check policy-opa-pdp | INFO[2025-06-21T11:49:45.8968+00:00] Policy file saved: /opt/policies/slice/capacity/check/policy.rego policy-opa-pdp | INFO[2025-06-21T11:49:45.8969+00:00] Directory created: /opt/data/node/slice/capacity/check policy-opa-pdp | INFO[2025-06-21T11:49:45.8969+00:00] Data file saved: /opt/data/node/slice/capacity/check/data.json policy-opa-pdp | DEBU[2025-06-21T11:49:45.8969+00:00] Before calling combinedoutput policy-opa-pdp | DEBU[2025-06-21T11:49:45.9130+00:00] Bundle Built Sucessfully.... 
policy-opa-pdp | DEBU[2025-06-21T11:49:45.9151+00:00] storage not found creating : /node policy-opa-pdp | DEBU[2025-06-21T11:49:45.9151+00:00] storage not found creating : /node/slice policy-opa-pdp | DEBU[2025-06-21T11:49:45.9151+00:00] storage not found creating : /node/slice/capacity policy-opa-pdp | DEBU[2025-06-21T11:49:45.9151+00:00] storage not found creating : /node/slice/capacity/check policy-opa-pdp | INFO[2025-06-21T11:49:45.9152+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-21T11:49:45.9152+00:00] Loaded Policy: slice.capacity.check policy-opa-pdp | 2025/06/21 11:49:45 KafkaProducer or producer produce message policy-opa-pdp | INFO[2025-06-21T11:49:45.9152+00:00] Processed policies_to_be_deployed successfully policy-opa-pdp | INFO[2025-06-21T11:49:45.9152+00:00] Sending PDP Status With Update Response policy-opa-pdp | DEBU[2025-06-21T11:49:45.9153+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"c9bc964d-b03e-4503-a19a-a42aaedded09","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"edde39c0-d5e6-4855-89ec-91e8ded44e9e","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506585915","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-21T11:49:45.9153+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-21T11:49:45.9153+00:00] 120000 policy-opa-pdp | DEBU[2025-06-21T11:49:45.9154+00:00] New Ticker started with interval 120000 policy-opa-pdp | DEBU[2025-06-21T11:49:45.9233+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"c9bc964d-b03e-4503-a19a-a42aaedded09","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"edde39c0-d5e6-4855-89ec-91e8ded44e9e","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506585915","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-21T11:49:45.9234+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-21T11:49:45.9234+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-21T11:49:45.9601+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"e4a529b0-08f1-4c5d-841b-661c630ec9c0","timestampMs":1750506585826,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-21T11:49:45.9602+00:00] messageType: PDP_STATE_CHANGE policy-opa-pdp | 
DEBU[2025-06-21T11:49:45.9602+00:00] PDP STATE CHANGE message received: {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"e4a529b0-08f1-4c5d-841b-661c630ec9c0","timestampMs":1750506585826,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-21T11:49:45.9603+00:00] State change from PASSIVE To : ACTIVE policy-opa-pdp | INFO[2025-06-21T11:49:45.9603+00:00] Sending PDP Status With State Change response policy-opa-pdp | 2025/06/21 11:49:45 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-21T11:49:45.9604+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"e4a529b0-08f1-4c5d-841b-661c630ec9c0","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"146f2f2a-8f26-431a-8f38-09108e29be91","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506585960","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-21T11:49:45.9604+00:00] PDP_STATUS With State Change Message Sent Successfully policy-opa-pdp | DEBU[2025-06-21T11:49:45.9673+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"e4a529b0-08f1-4c5d-841b-661c630ec9c0","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"146f2f2a-8f26-431a-8f38-09108e29be91","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506585960","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-21T11:49:45.9674+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-21T11:49:45.9674+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-21T11:49:46.2709+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"9a85a9c2-9120-4e71-aa4e-5a7076f64d19","timestampMs":1750506586256,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-21T11:49:46.2710+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-21T11:49:46.2711+00:00] PDP_UPDATE Message received: {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"9a85a9c2-9120-4e71-aa4e-5a7076f64d19","timestampMs":1750506586256,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | INFO[2025-06-21T11:49:46.2712+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/21 11:49:46 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-21T11:49:46.2712+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp 
Update","response":{"responseTo":"9a85a9c2-9120-4e71-aa4e-5a7076f64d19","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"055b4de3-d337-4200-b30c-1e10959219ea","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506586271","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-21T11:49:46.2712+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-21T11:49:46.2712+00:00] 120000 policy-opa-pdp | DEBU[2025-06-21T11:49:46.2786+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"9a85a9c2-9120-4e71-aa4e-5a7076f64d19","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"055b4de3-d337-4200-b30c-1e10959219ea","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506586271","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-21T11:49:46.2787+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-21T11:49:46.2787+00:00] discarding event of type PDP_STATUS policy-opa-pdp | 2025/06/21 11:50:45 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-21T11:50:45.2545+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"7192d6da-4f14-480e-b111-c9ccd8f7b49c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506645254","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-21T11:50:45.2551+00:00] Sending Heartbeat ... 
policy-opa-pdp | DEBU[2025-06-21T11:50:45.2646+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"7192d6da-4f14-480e-b111-c9ccd8f7b49c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506645254","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-21T11:50:45.2647+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-21T11:50:45.2647+00:00] discarding event of type PDP_STATUS policy-opa-pdp | WARN[2025-06-21T11:51:02.6575+00:00] Invalid or Missing Request ID policy-opa-pdp | DEBU[2025-06-21T11:51:02.6576+00:00] Received Health Check message policy-opa-pdp | INFO[2025-06-21T11:51:02.6646+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-21T11:51:02.6647+00:00] datapath to get Data : / policy-opa-pdp | DEBU[2025-06-21T11:51:02.6648+00:00] Json Data at /: {"node":{"slice":{"capacity":{"check":{"threshold":70}}}},"system":{"version":{"build_commit":"","build_hostname":"","build_timestamp":"","version":"1.1.0"}}} policy-opa-pdp | DEBU[2025-06-21T11:51:03.9987+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"cd677b84-d2a3-4c1b-a7fa-505658003fb1","timestampMs":1750506663939,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-21T11:51:03.9987+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-21T11:51:03.9988+00:00] PDP_UPDATE Message received: 
{"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"cd677b84-d2a3-4c1b-a7fa-505658003fb1","timestampMs":1750506663939,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-21T11:51:03.9989+00:00] Check if Policy is Already Deployed: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-21T11:51:03.9989+00:00] Policy is new and should be deployed: zoneB 1.0.6 policy-opa-pdp | DEBU[2025-06-21T11:51:03.9989+00:00] Policy Is Allowed: zoneB policy-opa-pdp | DEBU[2025-06-21T11:51:03.9989+00:00] Validating properties data for policy: zoneB policy-opa-pdp | DEBU[2025-06-21T11:51:03.9989+00:00] Validating properties policy for policy: zoneB policy-opa-pdp | INFO[2025-06-21T11:51:03.9989+00:00] Validation successful for policy: zoneB policy-opa-pdp | INFO[2025-06-21T11:51:03.9991+00:00] Directory created: /opt/policies/zoneB policy-opa-pdp | INFO[2025-06-21T11:51:03.9991+00:00] Policy file saved: /opt/policies/zoneB/policy.rego policy-opa-pdp | INFO[2025-06-21T11:51:03.9992+00:00] Directory created: /opt/data/node/zoneB policy-opa-pdp | INFO[2025-06-21T11:51:03.9992+00:00] Data file saved: /opt/data/node/zoneB/data.json policy-opa-pdp | DEBU[2025-06-21T11:51:03.9992+00:00] Before calling combinedoutput policy-opa-pdp | DEBU[2025-06-21T11:51:04.0328+00:00] Bundle Built Sucessfully.... 
policy-opa-pdp | DEBU[2025-06-21T11:51:04.0368+00:00] storage not found creating : /node/zoneB policy-opa-pdp | INFO[2025-06-21T11:51:04.0369+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.zoneB" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "zoneB" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "zoneB", policy-opa-pdp | "policy-version": "1.0.6" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-21T11:51:04.0369+00:00] Loaded Policy: zoneB policy-opa-pdp | INFO[2025-06-21T11:51:04.0370+00:00] Processed policies_to_be_deployed successfully policy-opa-pdp | INFO[2025-06-21T11:51:04.0371+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/21 11:51:04 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-21T11:51:04.0372+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"cd677b84-d2a3-4c1b-a7fa-505658003fb1","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"9d0c3142-8a87-486b-ae32-59b6d9d8c238","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506664037","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-21T11:51:04.0372+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-21T11:51:04.0372+00:00] 0 policy-opa-pdp | DEBU[2025-06-21T11:51:04.0461+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"cd677b84-d2a3-4c1b-a7fa-505658003fb1","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"9d0c3142-8a87-486b-ae32-59b6d9d8c238","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506664037","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-21T11:51:04.0462+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-21T11:51:04.0462+00:00] discarding event of type PDP_STATUS policy-opa-pdp | INFO[2025-06-21T11:51:27.2292+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-21T11:51:27.2293+00:00] datapath to get Data : /node/zoneB/zone policy-opa-pdp | DEBU[2025-06-21T11:51:27.2293+00:00] Json Data at /node/zoneB/zone: 
{"zone_access_logs":[{"access":"granted","log_id":"log1","timestamp":"2024-11-01T09:00:00Z","user":"user1","zone_id":"zoneA"},{"access":"denied","log_id":"log2","timestamp":"2024-11-01T10:30:00Z","user":"user2","zone_id":"zoneA"},{"access":"granted","log_id":"log3","timestamp":"2024-11-01T11:00:00Z","user":"user3","zone_id":"zoneB"}]} policy-opa-pdp | DEBU[2025-06-21T11:51:27.2390+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-21T11:51:27.2392+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-21T11:51:27.2398+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-21T11:51:27.2400+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"ab41efe6-bd56-4bb7-b40d-4059964ad3b0","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"4403f7d1-9547-4557-bd91-dd1159a3a127","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":1040,"timer_rego_query_compile_ns":141202,"timer_rego_query_eval_ns":568489,"timer_rego_query_parse_ns":144192,"timer_sdk_decision_eval_ns":1106417},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-21T11:51:27Z","timestamp":"2025-06-21T11:51:27.240249677Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-21T11:51:27.2424+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "ab41efe6-bd56-4bb7-b40d-4059964ad3b0", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_log_view": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "has_zone_access": [ policy-opa-pdp | { policy-opa-pdp | "access": "granted", policy-opa-pdp | "user": "user1" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-21T11:51:27.2493+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-21T11:51:27.2493+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-21T11:51:27.2496+00:00] Validation successful for request fields policy-opa-pdp | WARN[2025-06-21T11:51:27.2497+00:00] Policy Name zoeB does not exist policy-opa-pdp | DEBU[2025-06-21T11:51:27.2556+00:00] PDP received a decision request. 
policy-opa-pdp | DEBU[2025-06-21T11:51:27.2556+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-21T11:51:27.2561+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-21T11:51:27.2563+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"375ae202-6e3f-4751-9bba-fef1e73582e9","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"4403f7d1-9547-4557-bd91-dd1159a3a127","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":940,"timer_rego_query_eval_ns":608299,"timer_sdk_decision_eval_ns":755901},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-21T11:51:27Z","timestamp":"2025-06-21T11:51:27.256470239Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-21T11:51:27.2574+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "375ae202-6e3f-4751-9bba-fef1e73582e9", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_log_view": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "has_zone_access": [ policy-opa-pdp | { policy-opa-pdp | "access": "granted", policy-opa-pdp | "user": "user1" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-21T11:51:27.5819+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"d1299325-b9f9-4d29-b66a-68793b7dea43","timestampMs":1750506687543,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-21T11:51:27.5820+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-21T11:51:27.5822+00:00] PDP_UPDATE Message received: {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"d1299325-b9f9-4d29-b66a-68793b7dea43","timestampMs":1750506687543,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | INFO[2025-06-21T11:51:27.5822+00:00] Found Policies to be undeployed policy-opa-pdp | INFO[2025-06-21T11:51:27.5822+00:00] Extracted Policy Name: zoneB, Version: 1.0.6 for undeployment policy-opa-pdp | DEBU[2025-06-21T11:51:27.5823+00:00] Deleting Policy from OPA : /zoneB policy-opa-pdp | DEBU[2025-06-21T11:51:27.5850+00:00] Removing policy directory: /opt/policies/zoneB policy-opa-pdp | DEBU[2025-06-21T11:51:27.5853+00:00] Deleting data from OPA : /node/zoneB policy-opa-pdp | DEBU[2025-06-21T11:51:27.5853+00:00] Analyzing dataPath: /node/zoneB policy-opa-pdp | DEBU[2025-06-21T11:51:27.5854+00:00] Path segments: [ node zoneB] policy-opa-pdp | DEBU[2025-06-21T11:51:27.5855+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/zoneB policy-opa-pdp | DEBU[2025-06-21T11:51:27.5856+00:00] Removing data directory: /opt/data/node/zoneB policy-opa-pdp | 
INFO[2025-06-21T11:51:27.5858+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-21T11:51:27.5859+00:00] Policies Map After Undeployment : { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-21T11:51:27.5860+00:00] Processed policies_to_be_undeployed successfully policy-opa-pdp | INFO[2025-06-21T11:51:27.5862+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/21 11:51:27 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-21T11:51:27.5864+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"d1299325-b9f9-4d29-b66a-68793b7dea43","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"624c5ea4-2dab-46c9-b9af-e2cc1385d952","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506687586","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-21T11:51:27.5865+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-21T11:51:27.5865+00:00] 0 policy-opa-pdp | DEBU[2025-06-21T11:51:27.5937+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"d1299325-b9f9-4d29-b66a-68793b7dea43","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"624c5ea4-2dab-46c9-b9af-e2cc1385d952","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506687586","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-21T11:51:27.5938+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-21T11:51:27.5939+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-21T11:51:28.6819+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | 
{"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"68464c2b-48c1-409f-9dd8-77dcf323b352","timestampMs":1750506688660,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-21T11:51:28.6821+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-21T11:51:28.6823+00:00] PDP_UPDATE Message received: {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"68464c2b-48c1-409f-9dd8-77dcf323b352","timestampMs":1750506688660,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-21T11:51:28.6823+00:00] Check if Policy is Already Deployed: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-21T11:51:28.6824+00:00] Policy is new and should be deployed: vehicle 1.0.6 policy-opa-pdp | DEBU[2025-06-21T11:51:28.6825+00:00] Policy Is Allowed: vehicle policy-opa-pdp | 
DEBU[2025-06-21T11:51:28.6825+00:00] Validating properties data for policy: vehicle policy-opa-pdp | DEBU[2025-06-21T11:51:28.6825+00:00] Validating properties policy for policy: vehicle policy-opa-pdp | INFO[2025-06-21T11:51:28.6826+00:00] Validation successful for policy: vehicle policy-opa-pdp | INFO[2025-06-21T11:51:28.6827+00:00] Directory created: /opt/policies/vehicle policy-opa-pdp | INFO[2025-06-21T11:51:28.6828+00:00] Policy file saved: /opt/policies/vehicle/policy.rego policy-opa-pdp | INFO[2025-06-21T11:51:28.6829+00:00] Directory created: /opt/data/node/vehicle policy-opa-pdp | INFO[2025-06-21T11:51:28.6830+00:00] Data file saved: /opt/data/node/vehicle/data.json policy-opa-pdp | DEBU[2025-06-21T11:51:28.6831+00:00] Before calling combinedoutput policy-opa-pdp | DEBU[2025-06-21T11:51:28.6991+00:00] Bundle Built Sucessfully.... policy-opa-pdp | DEBU[2025-06-21T11:51:28.7029+00:00] storage not found creating : /node/vehicle policy-opa-pdp | INFO[2025-06-21T11:51:28.7031+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.vehicle" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "vehicle" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "vehicle", policy-opa-pdp | "policy-version": "1.0.6" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-21T11:51:28.7031+00:00] Loaded Policy: vehicle policy-opa-pdp | INFO[2025-06-21T11:51:28.7033+00:00] Processed policies_to_be_deployed successfully policy-opa-pdp | INFO[2025-06-21T11:51:28.7034+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/21 11:51:28 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-21T11:51:28.7036+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"68464c2b-48c1-409f-9dd8-77dcf323b352","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"a49b769d-0bc4-4466-9628-19d9efb2c8e6","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506688703","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-21T11:51:28.7037+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-21T11:51:28.7037+00:00] 0 policy-opa-pdp | DEBU[2025-06-21T11:51:28.7113+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"68464c2b-48c1-409f-9dd8-77dcf323b352","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": 
\"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"a49b769d-0bc4-4466-9628-19d9efb2c8e6","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506688703","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-21T11:51:28.7114+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-21T11:51:28.7115+00:00] discarding event of type PDP_STATUS policy-opa-pdp | 2025/06/21 11:51:45 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-21T11:51:45.9330+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"5ff917e7-fc62-44ab-b6e4-bc2fb6c2c1c0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506705932","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-21T11:51:45.9331+00:00] Sending Heartbeat ... policy-opa-pdp | DEBU[2025-06-21T11:51:45.9411+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"5ff917e7-fc62-44ab-b6e4-bc2fb6c2c1c0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506705932","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-21T11:51:45.9412+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-21T11:51:45.9413+00:00] discarding event of type PDP_STATUS policy-opa-pdp | INFO[2025-06-21T11:51:52.7562+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-21T11:51:52.7563+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | DEBU[2025-06-21T11:51:52.7564+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} policy-opa-pdp | INFO[2025-06-21T11:51:52.7678+00:00] PDP received a request to update data through API policy-opa-pdp | DEBU[2025-06-21T11:51:52.7683+00:00] All fields are valid! 
policy-opa-pdp | INFO[2025-06-21T11:51:52.7685+00:00] data : [map[op:add path:/round value:trail]] policy-opa-pdp | INFO[2025-06-21T11:51:52.7686+00:00] policy name : vehicle policy-opa-pdp | DEBU[2025-06-21T11:51:52.7688+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] policy-opa-pdp | DEBU[2025-06-21T11:51:52.7689+00:00] dirParts : [ node vehicle] policy-opa-pdp | INFO[2025-06-21T11:51:52.7692+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} policy-opa-pdp | DEBU[2025-06-21T11:51:52.7693+00:00] root: /node/vehicle policy-opa-pdp | DEBU[2025-06-21T11:51:52.7694+00:00] path : round policy-opa-pdp | INFO[2025-06-21T11:51:52.7695+00:00] calling ParsePatchPathEscaped to check the path policy-opa-pdp | DEBU[2025-06-21T11:51:52.7696+00:00] No path conflicts detected policy-opa-pdp | INFO[2025-06-21T11:51:52.7697+00:00] Updated the data in the corresponding path successfully policy-opa-pdp | INFO[2025-06-21T11:51:52.7764+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-21T11:51:52.7765+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | DEBU[2025-06-21T11:51:52.7765+00:00] Json Data at /node/vehicle: {"round":"trail","vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} policy-opa-pdp | INFO[2025-06-21T11:51:52.7894+00:00] PDP received a request to update data through API policy-opa-pdp | DEBU[2025-06-21T11:51:52.7899+00:00] All fields are valid! policy-opa-pdp | INFO[2025-06-21T11:51:52.7900+00:00] data : [map[op:replace path:/round value:%!s(float64=578)]] policy-opa-pdp | INFO[2025-06-21T11:51:52.7901+00:00] policy name : vehicle policy-opa-pdp | DEBU[2025-06-21T11:51:52.7902+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] policy-opa-pdp | DEBU[2025-06-21T11:51:52.7903+00:00] dirParts : [ node vehicle] policy-opa-pdp | INFO[2025-06-21T11:51:52.7904+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} policy-opa-pdp | DEBU[2025-06-21T11:51:52.7905+00:00] root: /node/vehicle policy-opa-pdp | DEBU[2025-06-21T11:51:52.7905+00:00] path : round policy-opa-pdp | INFO[2025-06-21T11:51:52.7906+00:00] calling ParsePatchPathEscaped to check the path policy-opa-pdp | DEBU[2025-06-21T11:51:52.7908+00:00] No path conflicts detected policy-opa-pdp | INFO[2025-06-21T11:51:52.7909+00:00] Updated the data in the corresponding path successfully policy-opa-pdp | INFO[2025-06-21T11:51:52.7976+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-21T11:51:52.7977+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | DEBU[2025-06-21T11:51:52.7977+00:00] Json Data at /node/vehicle: {"round":578,"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} policy-opa-pdp | INFO[2025-06-21T11:51:52.8104+00:00] PDP received a request to update data through API policy-opa-pdp | DEBU[2025-06-21T11:51:52.8107+00:00] All fields are valid! 
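[editor's note] The data-update requests in this stretch are plain JSON Patch operations against the vehicle policy's data: the add and replace on /round shown above, followed by the remove logged just below, after which the GET shows the original document again. A small sketch reproducing that sequence with the jsonpatch library (document contents taken from the data API responses in the log):

    import jsonpatch

    # /node/vehicle data as returned by the data API above.
    doc = {"vehicles": [
        {"owner": "user1", "status": "available", "type": "car", "vehicle_id": "v1"},
        {"owner": "user2", "status": "in use", "type": "bike", "vehicle_id": "v2"},
    ]}

    # The three dynamic-data updates exercised by the test.
    doc = jsonpatch.apply_patch(doc, [{"op": "add", "path": "/round", "value": "trail"}])
    doc = jsonpatch.apply_patch(doc, [{"op": "replace", "path": "/round", "value": 578}])
    doc = jsonpatch.apply_patch(doc, [{"op": "remove", "path": "/round"}])
    assert "round" not in doc   # back to the original document, as the final GET shows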
policy-opa-pdp | INFO[2025-06-21T11:51:52.8108+00:00] data : [map[op:remove path:/round]] policy-opa-pdp | INFO[2025-06-21T11:51:52.8108+00:00] policy name : vehicle policy-opa-pdp | DEBU[2025-06-21T11:51:52.8109+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] policy-opa-pdp | DEBU[2025-06-21T11:51:52.8110+00:00] dirParts : [ node vehicle] policy-opa-pdp | INFO[2025-06-21T11:51:52.8111+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} policy-opa-pdp | DEBU[2025-06-21T11:51:52.8112+00:00] root: /node/vehicle policy-opa-pdp | DEBU[2025-06-21T11:51:52.8112+00:00] path : round policy-opa-pdp | INFO[2025-06-21T11:51:52.8112+00:00] calling ParsePatchPathEscaped to check the path policy-opa-pdp | DEBU[2025-06-21T11:51:52.8113+00:00] No path conflicts detected policy-opa-pdp | INFO[2025-06-21T11:51:52.8114+00:00] Updated the data in the corresponding path successfully policy-opa-pdp | INFO[2025-06-21T11:51:52.8208+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-21T11:51:52.8208+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | DEBU[2025-06-21T11:51:52.8210+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} policy-opa-pdp | DEBU[2025-06-21T11:51:52.8302+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-21T11:51:52.8303+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-21T11:51:52.8306+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-21T11:51:52.8307+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"1677c810-cbb2-49b5-9b50-8fc47e2c264e","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"4403f7d1-9547-4557-bd91-dd1159a3a127","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":1280,"timer_rego_query_compile_ns":195683,"timer_rego_query_eval_ns":583228,"timer_rego_query_parse_ns":116181,"timer_sdk_decision_eval_ns":1115356},"msg":"Decision Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-21T11:51:52Z","timestamp":"2025-06-21T11:51:52.830905247Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-21T11:51:52.8325+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "1677c810-cbb2-49b5-9b50-8fc47e2c264e", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_granted": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "user_has_vehicle_access": [ policy-opa-pdp | { policy-opa-pdp | "status": "available", policy-opa-pdp | "type": "car" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-21T11:51:52.8456+00:00] PDP received a decision request. 
policy-opa-pdp | DEBU[2025-06-21T11:51:52.8456+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-21T11:51:52.8458+00:00] Validation successful for request fields policy-opa-pdp | WARN[2025-06-21T11:51:52.8459+00:00] Policy Name vehile does not exist policy-opa-pdp | DEBU[2025-06-21T11:51:52.8522+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-21T11:51:52.8523+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-21T11:51:52.8527+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-21T11:51:52.8528+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"480fcc4a-14f1-4837-b1fc-22f268fc0070","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"4403f7d1-9547-4557-bd91-dd1159a3a127","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":930,"timer_rego_query_eval_ns":374966,"timer_sdk_decision_eval_ns":523678},"msg":"Decision Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-21T11:51:52Z","timestamp":"2025-06-21T11:51:52.852917177Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-21T11:51:52.8536+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "480fcc4a-14f1-4837-b1fc-22f268fc0070", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_granted": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "user_has_vehicle_access": [ policy-opa-pdp | { policy-opa-pdp | "status": "available", policy-opa-pdp | "type": "car" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-21T11:51:53.1175+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"40cb89ce-924b-4aa6-8d74-47387d8df365","timestampMs":1750506713099,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-21T11:51:53.1176+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-21T11:51:53.1178+00:00] PDP_UPDATE Message received: {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"40cb89ce-924b-4aa6-8d74-47387d8df365","timestampMs":1750506713099,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | INFO[2025-06-21T11:51:53.1178+00:00] Found Policies to be undeployed policy-opa-pdp | INFO[2025-06-21T11:51:53.1178+00:00] Extracted Policy Name: vehicle, Version: 1.0.6 for undeployment policy-opa-pdp | DEBU[2025-06-21T11:51:53.1179+00:00] Deleting Policy from OPA : /vehicle policy-opa-pdp | DEBU[2025-06-21T11:51:53.1202+00:00] Removing policy directory: /opt/policies/vehicle policy-opa-pdp | DEBU[2025-06-21T11:51:53.1206+00:00] Deleting data from OPA : /node/vehicle policy-opa-pdp | DEBU[2025-06-21T11:51:53.1206+00:00] Analyzing dataPath: /node/vehicle policy-opa-pdp | DEBU[2025-06-21T11:51:53.1207+00:00] 
Path segments: [ node vehicle] policy-opa-pdp | DEBU[2025-06-21T11:51:53.1207+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/vehicle policy-opa-pdp | DEBU[2025-06-21T11:51:53.1207+00:00] Removing data directory: /opt/data/node/vehicle policy-opa-pdp | INFO[2025-06-21T11:51:53.1210+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-21T11:51:53.1210+00:00] Policies Map After Undeployment : { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-21T11:51:53.1210+00:00] Processed policies_to_be_undeployed successfully policy-opa-pdp | INFO[2025-06-21T11:51:53.1211+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/21 11:51:53 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-21T11:51:53.1212+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"40cb89ce-924b-4aa6-8d74-47387d8df365","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"3254fdbc-6122-4114-a94a-27a1e2a6113d","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506713121","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-21T11:51:53.1213+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-21T11:51:53.1213+00:00] 0 policy-opa-pdp | DEBU[2025-06-21T11:51:53.1287+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"40cb89ce-924b-4aa6-8d74-47387d8df365","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"3254fdbc-6122-4114-a94a-27a1e2a6113d","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506713121","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-21T11:51:53.1287+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-21T11:51:53.1287+00:00] discarding event of type PDP_STATUS policy-opa-pdp | INFO[2025-06-21T11:51:53.4685+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-21T11:51:53.4687+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | WARN[2025-06-21T11:51:53.4688+00:00] Error in reading data under /node/vehicle path policy-opa-pdp | ERRO[2025-06-21T11:51:53.4690+00:00] Error in getting 
data - storage_not_found_error: /node/vehicle: document does not exist policy-opa-pdp | INFO[2025-06-21T11:51:53.4794+00:00] PDP received a request to update data through API policy-opa-pdp | DEBU[2025-06-21T11:51:53.4796+00:00] All fields are valid! policy-opa-pdp | INFO[2025-06-21T11:51:53.4797+00:00] data : [map[op:remove path:/round]] policy-opa-pdp | INFO[2025-06-21T11:51:53.4797+00:00] policy name : vehicle policy-opa-pdp | DEBU[2025-06-21T11:51:53.4797+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0]] policy-opa-pdp | ERRO[2025-06-21T11:51:53.4797+00:00] Policy associated with the patch request does not exists policy-opa-pdp | DEBU[2025-06-21T11:51:54.1487+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSI
sCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8adeaa83-8371-454d-82eb-ab314833e655","timestampMs":1750506714130,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-21T11:51:54.1492+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-21T11:51:54.1499+00:00] PDP_UPDATE Message received: 
{"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8adeaa83-8371-454d-82eb-ab314833e655","timestampMs":1750506714130,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-21T11:51:54.1500+00:00] Check if Policy is Already Deployed: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-21T11:51:54.1503+00:00] Policy is new and should be deployed: abac 1.0.7 policy-opa-pdp | DEBU[2025-06-21T11:51:54.1503+00:00] Policy Is Allowed: abac policy-opa-pdp | DEBU[2025-06-21T11:51:54.1504+00:00] Validating properties data for policy: abac policy-opa-pdp | DEBU[2025-06-21T11:51:54.1504+00:00] Validating properties policy for policy: abac policy-opa-pdp | INFO[2025-06-21T11:51:54.1505+00:00] Validation successful for policy: abac policy-opa-pdp | INFO[2025-06-21T11:51:54.1508+00:00] Directory created: /opt/policies/abac policy-opa-pdp | INFO[2025-06-21T11:51:54.1511+00:00] Policy file saved: /opt/policies/abac/policy.rego policy-opa-pdp | INFO[2025-06-21T11:51:54.1513+00:00] Directory created: /opt/data/node/abac policy-opa-pdp | INFO[2025-06-21T11:51:54.1516+00:00] Data file saved: /opt/data/node/abac/data.json policy-opa-pdp | DEBU[2025-06-21T11:51:54.1516+00:00] Before calling combinedoutput policy-opa-pdp | DEBU[2025-06-21T11:51:54.1718+00:00] Bundle Built Sucessfully.... 
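The policy and data payloads carried in the PDP_UPDATE above are plain base64. A minimal Python sketch of decoding them, using the field names exactly as they appear in the logged message (raw_kafka_value is a hypothetical variable holding that JSON string):

    import base64
    import json

    pdp_update = json.loads(raw_kafka_value)
    deployed = pdp_update["policiesToBeDeployed"][0]

    # Decodes to the "package abac" Rego module evaluated in the decisions below.
    rego_source = base64.b64decode(deployed["properties"]["policy"]["abac"]).decode()

    # Decodes to the sensor_data document written to /opt/data/node/abac/data.json.
    data_doc = json.loads(base64.b64decode(deployed["properties"]["data"]["node.abac"]))

    print(rego_source)
    print(json.dumps(data_doc, indent=2))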
policy-opa-pdp | DEBU[2025-06-21T11:51:54.1746+00:00] storage not found creating : /node/abac policy-opa-pdp | INFO[2025-06-21T11:51:54.1748+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.abac" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "abac" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "abac", policy-opa-pdp | "policy-version": "1.0.7" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-21T11:51:54.1748+00:00] Loaded Policy: abac policy-opa-pdp | INFO[2025-06-21T11:51:54.1748+00:00] Processed policies_to_be_deployed successfully policy-opa-pdp | 2025/06/21 11:51:54 KafkaProducer or producer produce message policy-opa-pdp | INFO[2025-06-21T11:51:54.1749+00:00] Sending PDP Status With Update Response policy-opa-pdp | DEBU[2025-06-21T11:51:54.1750+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8adeaa83-8371-454d-82eb-ab314833e655","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"14e3b33b-8ea0-4f50-82f8-07eb393270ab","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506714174","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-21T11:51:54.1751+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-21T11:51:54.1751+00:00] 0 policy-opa-pdp | DEBU[2025-06-21T11:51:54.1830+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8adeaa83-8371-454d-82eb-ab314833e655","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"14e3b33b-8ea0-4f50-82f8-07eb393270ab","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506714174","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-21T11:51:54.1831+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-21T11:51:54.1831+00:00] discarding event of type PDP_STATUS policy-opa-pdp | INFO[2025-06-21T11:52:18.2253+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-21T11:52:18.2254+00:00] datapath to get Data : /node/abac policy-opa-pdp | DEBU[2025-06-21T11:52:18.2255+00:00] Json Data at /node/abac: {"sensor_data":[{"humidity":"40%","id":"0001","location":"Sri Lanka","particle_density":"1.3 g/l","precipitation":"1000 mm","temperature":"28 C","timestamp":"2024-02-26","windspeed":"5.5 m/s"},{"humidity":"45%","id":"0002","location":"Colombo","particle_density":"1.5 g/l","precipitation":"1200 mm","temperature":"30 
C","timestamp":"2024-02-26","windspeed":"6.0 m/s"},{"humidity":"60%","id":"0003","location":"Kandy","particle_density":"1.1 g/l","precipitation":"800 mm","temperature":"25 C","timestamp":"2024-02-26","windspeed":"4.5 m/s"},{"humidity":"30%","id":"0004","location":"Galle","particle_density":"1.8 g/l","precipitation":"500 mm","temperature":"35 C","timestamp":"2024-02-27","windspeed":"7.2 m/s"},{"humidity":"20%","id":"0005","location":"Jaffna","particle_density":"0.9 g/l","precipitation":"300 mm","temperature":"-5 C","timestamp":"2024-02-27","windspeed":"3.8 m/s"},{"humidity":"55%","id":"0006","location":"Trincomalee","particle_density":"1.2 g/l","precipitation":"1000 mm","temperature":"20 C","timestamp":"2024-02-28","windspeed":"5.0 m/s"},{"humidity":"50%","id":"0007","location":"Nuwara Eliya","particle_density":"1.3 g/l","precipitation":"600 mm","temperature":"25 C","timestamp":"2024-02-28","windspeed":"4.0 m/s"},{"humidity":"40%","id":"0008","location":"Anuradhapura","particle_density":"1.4 g/l","precipitation":"700 mm","temperature":"28 C","timestamp":"2024-02-29","windspeed":"5.8 m/s"},{"humidity":"65%","id":"0009","location":"Matara","particle_density":"1.6 g/l","precipitation":"900 mm","temperature":"32 C","timestamp":"2024-02-29","windspeed":"6.5 m/s"}]} policy-opa-pdp | DEBU[2025-06-21T11:52:18.2357+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-21T11:52:18.2358+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-21T11:52:18.2362+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-21T11:52:18.2363+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"6a8d58ba-4724-46d1-91d5-4feb9e4acaef","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"4403f7d1-9547-4557-bd91-dd1159a3a127","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":850,"timer_rego_query_compile_ns":135482,"timer_rego_query_eval_ns":820112,"timer_rego_query_parse_ns":109551,"timer_sdk_decision_eval_ns":1329019},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-21T11:52:18Z","timestamp":"2025-06-21T11:52:18.236500637Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-21T11:52:18.2383+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "6a8d58ba-4724-46d1-91d5-4feb9e4acaef", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_read": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "viewable_sensor_data": [ policy-opa-pdp | { policy-opa-pdp | "location": "Galle", policy-opa-pdp | "precipitation": "500 mm", policy-opa-pdp | "temperature": "35 C", policy-opa-pdp | "windspeed": "7.2 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Jaffna", policy-opa-pdp | "precipitation": "300 mm", policy-opa-pdp | "temperature": "-5 C", policy-opa-pdp | "windspeed": "3.8 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Nuwara Eliya", policy-opa-pdp | 
"precipitation": "600 mm", policy-opa-pdp | "temperature": "25 C", policy-opa-pdp | "windspeed": "4.0 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Trincomalee", policy-opa-pdp | "precipitation": "1000 mm", policy-opa-pdp | "temperature": "20 C", policy-opa-pdp | "windspeed": "5.0 m/s" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-21T11:52:18.2483+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-21T11:52:18.2484+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-21T11:52:18.2487+00:00] Validation successful for request fields policy-opa-pdp | WARN[2025-06-21T11:52:18.2489+00:00] Policy Name abc does not exist policy-opa-pdp | DEBU[2025-06-21T11:52:18.2551+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-21T11:52:18.2552+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-21T11:52:18.2554+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-21T11:52:18.2555+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"400a8561-3745-4c29-8999-2eedc682aa6f","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"4403f7d1-9547-4557-bd91-dd1159a3a127","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":610,"timer_rego_query_eval_ns":472857,"timer_sdk_decision_eval_ns":613409},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-21T11:52:18Z","timestamp":"2025-06-21T11:52:18.255556636Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-21T11:52:18.2564+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "400a8561-3745-4c29-8999-2eedc682aa6f", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_read": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "viewable_sensor_data": [ policy-opa-pdp | { policy-opa-pdp | "location": "Galle", policy-opa-pdp | "precipitation": "500 mm", policy-opa-pdp | "temperature": "35 C", policy-opa-pdp | "windspeed": "7.2 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Jaffna", policy-opa-pdp | "precipitation": "300 mm", policy-opa-pdp | "temperature": "-5 C", policy-opa-pdp | "windspeed": "3.8 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Nuwara Eliya", policy-opa-pdp | "precipitation": "600 mm", policy-opa-pdp | "temperature": "25 C", policy-opa-pdp | "windspeed": "4.0 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Trincomalee", policy-opa-pdp | "precipitation": "1000 mm", policy-opa-pdp | "temperature": "20 C", policy-opa-pdp | "windspeed": "5.0 m/s" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { 
policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-21T11:52:18.7885+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"1f7a7bf3-09ae-49f4-a65a-bd531ef8e211","timestampMs":1750506738770,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-21T11:52:18.7888+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-21T11:52:18.7891+00:00] PDP_UPDATE Message received: {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"1f7a7bf3-09ae-49f4-a65a-bd531ef8e211","timestampMs":1750506738770,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | INFO[2025-06-21T11:52:18.7894+00:00] Found Policies to be undeployed policy-opa-pdp | INFO[2025-06-21T11:52:18.7895+00:00] Extracted Policy Name: abac, Version: 1.0.7 for undeployment policy-opa-pdp | DEBU[2025-06-21T11:52:18.7897+00:00] Deleting Policy from OPA : /abac policy-opa-pdp | DEBU[2025-06-21T11:52:18.7926+00:00] Removing policy directory: /opt/policies/abac policy-opa-pdp | DEBU[2025-06-21T11:52:18.7928+00:00] Deleting data from OPA : /node/abac policy-opa-pdp | DEBU[2025-06-21T11:52:18.7928+00:00] Analyzing dataPath: /node/abac policy-opa-pdp | DEBU[2025-06-21T11:52:18.7928+00:00] Path segments: [ node abac] policy-opa-pdp | DEBU[2025-06-21T11:52:18.7928+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/abac policy-opa-pdp | DEBU[2025-06-21T11:52:18.7929+00:00] Removing data directory: /opt/data/node/abac policy-opa-pdp | INFO[2025-06-21T11:52:18.7931+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-21T11:52:18.7932+00:00] Policies Map After Undeployment : { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-21T11:52:18.7933+00:00] Processed policies_to_be_undeployed successfully policy-opa-pdp | INFO[2025-06-21T11:52:18.7933+00:00] Sending PDP Status With Update Response policy-opa-pdp | DEBU[2025-06-21T11:52:18.7934+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"1f7a7bf3-09ae-49f4-a65a-bd531ef8e211","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed 
:,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"205df326-e233-482a-b13b-f02884364058","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506738793","deploymentInstanceInfo":""} policy-opa-pdp | 2025/06/21 11:52:18 KafkaProducer or producer produce message policy-opa-pdp | INFO[2025-06-21T11:52:18.7935+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-21T11:52:18.7935+00:00] 0 policy-opa-pdp | DEBU[2025-06-21T11:52:18.8005+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"1f7a7bf3-09ae-49f4-a65a-bd531ef8e211","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"205df326-e233-482a-b13b-f02884364058","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506738793","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-21T11:52:18.8006+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-21T11:52:18.8006+00:00] discarding event of type PDP_STATUS policy-pap | Waiting for api port 6969... policy-pap | api (172.17.0.8:6969) open policy-pap | Waiting for kafka port 9092... policy-pap | kafka (172.17.0.5:9092) open policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-pap | policy-pap | . ____ _ __ _ _ policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / policy-pap | =========|_|==============|___/=/_/_/_/ policy-pap | policy-pap | :: Spring Boot :: (v3.4.6) policy-pap | policy-pap | [2025-06-21T11:47:40.977+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 61 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-pap | [2025-06-21T11:47:40.979+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default" policy-pap | [2025-06-21T11:47:42.405+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-pap | [2025-06-21T11:47:42.492+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 74 ms. Found 7 JPA repository interfaces. 
policy-pap | [2025-06-21T11:47:43.416+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) policy-pap | [2025-06-21T11:47:43.430+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-21T11:47:43.432+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2025-06-21T11:47:43.432+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] policy-pap | [2025-06-21T11:47:43.490+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2025-06-21T11:47:43.490+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2457 ms policy-pap | [2025-06-21T11:47:43.898+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2025-06-21T11:47:43.971+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final policy-pap | [2025-06-21T11:47:44.016+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-pap | [2025-06-21T11:47:44.412+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-pap | [2025-06-21T11:47:44.464+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2025-06-21T11:47:44.676+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@1d6a22dd policy-pap | [2025-06-21T11:47:44.678+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-pap | [2025-06-21T11:47:44.774+00:00|INFO|pooling|main] HHH10001005: Database info: policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] policy-pap | Database driver: undefined/unknown policy-pap | Database version: 16.4 policy-pap | Autocommit mode: undefined/unknown policy-pap | Isolation level: undefined/unknown policy-pap | Minimum pool size: undefined/unknown policy-pap | Maximum pool size: undefined/unknown policy-pap | [2025-06-21T11:47:46.705+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-pap | [2025-06-21T11:47:46.709+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2025-06-21T11:47:48.000+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5 policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | 
internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | 
ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-21T11:47:48.055+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-21T11:47:48.194+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-21T11:47:48.194+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-21T11:47:48.194+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750506468193 policy-pap | [2025-06-21T11:47:48.197+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-1, groupId=c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-21T11:47:48.198+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 
policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-21T11:47:48.198+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-21T11:47:48.206+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-21T11:47:48.206+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-21T11:47:48.206+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750506468206 policy-pap | [2025-06-21T11:47:48.206+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-21T11:47:48.556+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=opaGroup, description=null, pdpGroupState=ACTIVE, properties={}, pdpSubgroups=[PdpSubGroup(pdpType=opa, supportedPolicyTypes=[onap.policies.native.opa 1.0.0], policies=[slice.capacity.check 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties={}, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2025-06-21T11:47:48.680+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2025-06-21T11:47:48.762+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager policy-pap | [2025-06-21T11:47:48.979+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. policy-pap | [2025-06-21T11:47:49.770+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' policy-pap | [2025-06-21T11:47:49.883+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-21T11:47:49.911+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' policy-pap | [2025-06-21T11:47:49.932+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2025-06-21T11:47:49.933+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2025-06-21T11:47:49.933+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2025-06-21T11:47:49.934+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2025-06-21T11:47:49.934+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2025-06-21T11:47:49.934+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2025-06-21T11:47:49.934+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2025-06-21T11:47:49.936+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@2d45db20 policy-pap | [2025-06-21T11:47:49.946+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-21T11:47:49.947+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true 
policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5 policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 
policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-21T11:47:49.947+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-21T11:47:49.955+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-21T11:47:49.955+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-21T11:47:49.955+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750506469955 policy-pap | [2025-06-21T11:47:49.956+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3, groupId=c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-21T11:47:49.956+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2025-06-21T11:47:49.956+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=8c13636c-1985-4c4d-b4ad-7662650d5cda, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@7540aa55 policy-pap | [2025-06-21T11:47:49.956+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=8c13636c-1985-4c4d-b4ad-7662650d5cda, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-21T11:47:49.957+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | 
client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | 
ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-21T11:47:49.957+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-21T11:47:49.963+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-21T11:47:49.963+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-21T11:47:49.963+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750506469963 policy-pap | [2025-06-21T11:47:49.964+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-21T11:47:49.964+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2025-06-21T11:47:49.964+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=8c13636c-1985-4c4d-b4ad-7662650d5cda, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-21T11:47:49.964+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-21T11:47:49.965+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a5f05f88-32ec-4fc7-a5ac-45c332532c46, alive=false, publisher=null]]: starting policy-pap | [2025-06-21T11:47:49.976+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] 
policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm 
= SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-21T11:47:49.977+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-21T11:47:49.990+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-pap | [2025-06-21T11:47:50.006+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-21T11:47:50.006+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-21T11:47:50.006+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750506470006 policy-pap | [2025-06-21T11:47:50.006+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a5f05f88-32ec-4fc7-a5ac-45c332532c46, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-21T11:47:50.006+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b7362be5-529e-455f-b8f8-48033cc1dc76, alive=false, publisher=null]]: starting policy-pap | [2025-06-21T11:47:50.007+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | 
retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-21T11:47:50.007+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-21T11:47:50.008+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
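The two ConsumerConfig dumps above describe plain-text (security.protocol = PLAINTEXT, no SASL/SSL) Kafka consumers against kafka:9092 with String key/value deserializers and auto.offset.reset = latest, one in the generated group c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5 and one in group policy-pap; the two ProducerConfig dumps describe the matching idempotent String-serializing producers (acks = -1, retries = 2147483647). As a rough sketch, an equivalent consumer could be attached from the stock Kafka CLI to watch the same traffic (assuming the console tools are available in the kafka container; broker, topic, and property values are taken from the dumps, while the group name here is invented so as not to steal partitions from the real consumers):

    kafka-console-consumer.sh --bootstrap-server kafka:9092 \
        --topic policy-pdp-pap \
        --group debug-watcher \
        --consumer-property auto.offset.reset=latest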
policy-pap | [2025-06-21T11:47:50.011+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-21T11:47:50.011+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-21T11:47:50.011+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750506470011 policy-pap | [2025-06-21T11:47:50.011+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b7362be5-529e-455f-b8f8-48033cc1dc76, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-21T11:47:50.011+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2025-06-21T11:47:50.012+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2025-06-21T11:47:50.013+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2025-06-21T11:47:50.014+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2025-06-21T11:47:50.016+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2025-06-21T11:47:50.016+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2025-06-21T11:47:50.016+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2025-06-21T11:47:50.016+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-pap | [2025-06-21T11:47:50.017+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2025-06-21T11:47:50.017+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2025-06-21T11:47:50.018+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.86 seconds (process running for 10.445) policy-pap | [2025-06-21T11:47:50.020+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2025-06-21T11:47:50.509+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: yLVGLqvHTV69UyVchcNcSA policy-pap | [2025-06-21T11:47:50.510+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: yLVGLqvHTV69UyVchcNcSA policy-pap | [2025-06-21T11:47:50.522+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-21T11:47:50.522+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: yLVGLqvHTV69UyVchcNcSA policy-pap | [2025-06-21T11:47:50.551+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 policy-pap | [2025-06-21T11:47:50.552+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 policy-pap | [2025-06-21T11:47:50.566+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3, groupId=c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-21T11:47:50.567+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3, groupId=c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5] Cluster ID: yLVGLqvHTV69UyVchcNcSA policy-pap | [2025-06-21T11:47:50.700+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-21T11:47:50.773+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3, groupId=c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-21T11:47:51.010+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-21T11:47:51.023+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3, groupId=c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-21T11:47:51.439+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-21T11:47:51.513+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3, groupId=c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-21T11:47:51.520+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3, groupId=c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5] (Re-)joining group policy-pap | [2025-06-21T11:47:51.545+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3, groupId=c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5] Request joining group due to: need to re-join with the given member-id: consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3-0459b753-4a70-48d0-823d-9daace652ae8 policy-pap | [2025-06-21T11:47:51.546+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3, groupId=c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5] (Re-)joining group policy-pap | [2025-06-21T11:47:52.321+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-21T11:47:52.324+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-21T11:47:52.329+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-2e426b59-265a-42cc-ac6a-6f6cfb2677f1 policy-pap | 
[2025-06-21T11:47:52.329+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-21T11:47:54.574+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3, groupId=c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5] Successfully joined group with generation Generation{generationId=1, memberId='consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3-0459b753-4a70-48d0-823d-9daace652ae8', protocol='range'} policy-pap | [2025-06-21T11:47:54.586+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3, groupId=c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5] Finished assignment for group at generation 1: {consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3-0459b753-4a70-48d0-823d-9daace652ae8=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-21T11:47:54.644+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3, groupId=c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5] Successfully synced group in generation Generation{generationId=1, memberId='consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3-0459b753-4a70-48d0-823d-9daace652ae8', protocol='range'} policy-pap | [2025-06-21T11:47:54.646+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3, groupId=c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-21T11:47:54.649+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3, groupId=c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-21T11:47:54.667+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3, groupId=c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-21T11:47:54.684+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5-3, groupId=c84fc2a7-75d3-47a3-b9d0-c441ac0fc3f5] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
policy-pap | [2025-06-21T11:47:55.334+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-2e426b59-265a-42cc-ac6a-6f6cfb2677f1', protocol='range'} policy-pap | [2025-06-21T11:47:55.335+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-2e426b59-265a-42cc-ac6a-6f6cfb2677f1=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-21T11:47:55.341+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-2e426b59-265a-42cc-ac6a-6f6cfb2677f1', protocol='range'} policy-pap | [2025-06-21T11:47:55.341+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-21T11:47:55.341+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-21T11:47:55.344+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-21T11:47:55.346+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
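The sequence above is the standard classic-protocol group handshake, performed once per consumer: discover the group coordinator, (re-)join with a coordinator-assigned member id, sync to receive the range assignment (the single partition policy-pdp-pap-0 in each group), and, finding no committed offset, reset to position 0, which on this freshly created one-partition topic is both the earliest and the latest position. The resulting assignment could be confirmed with the standard group tool (again assuming the stock Kafka CLI is available; the group name is taken from the log):

    kafka-consumer-groups.sh --bootstrap-server kafka:9092 \
        --describe --group policy-pap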
policy-pap | [2025-06-21T11:48:41.623+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2025-06-21T11:48:41.623+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-pap | [2025-06-21T11:48:41.624+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms policy-pap | [2025-06-21T11:49:45.301+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: policy-pap | [] policy-pap | [2025-06-21T11:49:45.302+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"f6afdd01-58e9-4e6c-adbe-6e11e98c335c","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750506585254","deploymentInstanceInfo":""} policy-pap | [2025-06-21T11:49:45.302+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"f6afdd01-58e9-4e6c-adbe-6e11e98c335c","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750506585254","deploymentInstanceInfo":""} policy-pap | [2025-06-21T11:49:45.309+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-21T11:49:45.848+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting policy-pap | [2025-06-21T11:49:45.848+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting listener policy-pap | [2025-06-21T11:49:45.848+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting timer policy-pap | [2025-06-21T11:49:45.849+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=c9bc964d-b03e-4503-a19a-a42aaedded09, expireMs=1750506615849] policy-pap | [2025-06-21T11:49:45.850+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=c9bc964d-b03e-4503-a19a-a42aaedded09, expireMs=1750506615849] policy-pap | [2025-06-21T11:49:45.851+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting enqueue policy-pap | [2025-06-21T11:49:45.852+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate started policy-pap | [2025-06-21T11:49:45.854+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"c9bc964d-b03e-4503-a19a-a42aaedded09","timestampMs":1750506585826,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:49:45.901+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"c9bc964d-b03e-4503-a19a-a42aaedded09","timestampMs":1750506585826,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 
policy-pap | [2025-06-21T11:49:45.901+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-21T11:49:45.904+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"c9bc964d-b03e-4503-a19a-a42aaedded09","timestampMs":1750506585826,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:49:45.904+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-21T11:49:45.928+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"c9bc964d-b03e-4503-a19a-a42aaedded09","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"edde39c0-d5e6-4855-89ec-91e8ded44e9e","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506585915","deploymentInstanceInfo":""} policy-pap | [2025-06-21T11:49:45.929+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c9bc964d-b03e-4503-a19a-a42aaedded09 policy-pap | [2025-06-21T11:49:45.933+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"c9bc964d-b03e-4503-a19a-a42aaedded09","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": 
\"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"edde39c0-d5e6-4855-89ec-91e8ded44e9e","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506585915","deploymentInstanceInfo":""} policy-pap | [2025-06-21T11:49:45.933+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping policy-pap | [2025-06-21T11:49:45.934+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping enqueue policy-pap | [2025-06-21T11:49:45.934+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping timer policy-pap | [2025-06-21T11:49:45.934+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=c9bc964d-b03e-4503-a19a-a42aaedded09, expireMs=1750506615849] policy-pap | [2025-06-21T11:49:45.934+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping listener policy-pap | [2025-06-21T11:49:45.934+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopped policy-pap | [2025-06-21T11:49:45.947+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate successful policy-pap | [2025-06-21T11:49:45.947+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 start publishing next request policy-pap | [2025-06-21T11:49:45.947+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpStateChange starting policy-pap | [2025-06-21T11:49:45.947+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpStateChange starting listener policy-pap | [2025-06-21T11:49:45.947+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpStateChange starting timer policy-pap | [2025-06-21T11:49:45.947+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=e4a529b0-08f1-4c5d-841b-661c630ec9c0, expireMs=1750506615947] policy-pap | [2025-06-21T11:49:45.947+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpStateChange starting enqueue policy-pap | [2025-06-21T11:49:45.948+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpStateChange started policy-pap | [2025-06-21T11:49:45.948+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=e4a529b0-08f1-4c5d-841b-661c630ec9c0, expireMs=1750506615947] policy-pap | [2025-06-21T11:49:45.948+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"slice.capacity.check","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-21T11:49:45.952+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"e4a529b0-08f1-4c5d-841b-661c630ec9c0","timestampMs":1750506585826,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | 
[2025-06-21T11:49:45.965+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"e4a529b0-08f1-4c5d-841b-661c630ec9c0","timestampMs":1750506585826,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:49:45.966+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-21T11:49:45.970+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"e4a529b0-08f1-4c5d-841b-661c630ec9c0","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"146f2f2a-8f26-431a-8f38-09108e29be91","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506585960","deploymentInstanceInfo":""} policy-pap | [2025-06-21T11:49:45.970+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id e4a529b0-08f1-4c5d-841b-661c630ec9c0 policy-pap | [2025-06-21T11:49:45.972+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-notification=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-21T11:49:46.261+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"e4a529b0-08f1-4c5d-841b-661c630ec9c0","timestampMs":1750506585826,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:49:46.262+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-21T11:49:46.264+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"e4a529b0-08f1-4c5d-841b-661c630ec9c0","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"146f2f2a-8f26-431a-8f38-09108e29be91","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506585960","deploymentInstanceInfo":""} policy-pap | [2025-06-21T11:49:46.265+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpStateChange stopping policy-pap | [2025-06-21T11:49:46.265+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpStateChange stopping enqueue policy-pap | [2025-06-21T11:49:46.265+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpStateChange stopping timer policy-pap | [2025-06-21T11:49:46.265+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=e4a529b0-08f1-4c5d-841b-661c630ec9c0, expireMs=1750506615947] policy-pap | 
[2025-06-21T11:49:46.265+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpStateChange stopping listener policy-pap | [2025-06-21T11:49:46.265+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpStateChange stopped policy-pap | [2025-06-21T11:49:46.265+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpStateChange successful policy-pap | [2025-06-21T11:49:46.265+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 start publishing next request policy-pap | [2025-06-21T11:49:46.265+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting policy-pap | [2025-06-21T11:49:46.266+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting listener policy-pap | [2025-06-21T11:49:46.266+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting timer policy-pap | [2025-06-21T11:49:46.266+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=9a85a9c2-9120-4e71-aa4e-5a7076f64d19, expireMs=1750506616266] policy-pap | [2025-06-21T11:49:46.266+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting enqueue policy-pap | [2025-06-21T11:49:46.266+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate started policy-pap | [2025-06-21T11:49:46.266+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"9a85a9c2-9120-4e71-aa4e-5a7076f64d19","timestampMs":1750506586256,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:49:46.273+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"9a85a9c2-9120-4e71-aa4e-5a7076f64d19","timestampMs":1750506586256,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:49:46.273+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"9a85a9c2-9120-4e71-aa4e-5a7076f64d19","timestampMs":1750506586256,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:49:46.273+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-21T11:49:46.273+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-21T11:49:46.281+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp 
Update","response":{"responseTo":"9a85a9c2-9120-4e71-aa4e-5a7076f64d19","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"055b4de3-d337-4200-b30c-1e10959219ea","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506586271","deploymentInstanceInfo":""} policy-pap | [2025-06-21T11:49:46.281+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 9a85a9c2-9120-4e71-aa4e-5a7076f64d19 policy-pap | [2025-06-21T11:49:46.281+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"9a85a9c2-9120-4e71-aa4e-5a7076f64d19","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"055b4de3-d337-4200-b30c-1e10959219ea","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506586271","deploymentInstanceInfo":""} policy-pap | [2025-06-21T11:49:46.282+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping policy-pap | [2025-06-21T11:49:46.282+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping enqueue policy-pap | [2025-06-21T11:49:46.282+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping timer policy-pap | [2025-06-21T11:49:46.282+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=9a85a9c2-9120-4e71-aa4e-5a7076f64d19, expireMs=1750506616266] policy-pap | [2025-06-21T11:49:46.282+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping listener policy-pap | [2025-06-21T11:49:46.282+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopped policy-pap | [2025-06-21T11:49:46.288+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate successful policy-pap | [2025-06-21T11:49:46.288+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 has no more requests policy-pap | [2025-06-21T11:49:50.021+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms policy-pap | [2025-06-21T11:50:15.850+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=c9bc964d-b03e-4503-a19a-a42aaedded09, expireMs=1750506615849] policy-pap | [2025-06-21T11:50:15.948+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=e4a529b0-08f1-4c5d-841b-661c630ec9c0, expireMs=1750506615947] policy-pap | [2025-06-21T11:50:45.269+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"7192d6da-4f14-480e-b111-c9ccd8f7b49c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506645254","deploymentInstanceInfo":""} policy-pap | 
[2025-06-21T11:50:45.269+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"7192d6da-4f14-480e-b111-c9ccd8f7b49c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506645254","deploymentInstanceInfo":""} policy-pap | [2025-06-21T11:50:45.270+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-21T11:51:03.937+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group opaGroup policy-pap | [2025-06-21T11:51:03.938+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-7] add policy zoneB 1.0.6 to subgroup opaGroup opa count=2 policy-pap | [2025-06-21T11:51:03.938+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering a deploy for policy zoneB 1.0.6 policy-pap | [2025-06-21T11:51:03.939+00:00|INFO|SessionData|http-nio-6969-exec-7] add update opa-384c17a2-8037-4a68-95a5-eea37a9fe744 opaGroup opa policies=1 policy-pap | [2025-06-21T11:51:03.940+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group opaGroup policy-pap | [2025-06-21T11:51:03.940+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group opaGroup policy-pap | [2025-06-21T11:51:03.955+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=DEPLOYMENT, timestamp=2025-06-21T11:51:03Z, user=policyadmin)] policy-pap | [2025-06-21T11:51:03.989+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting policy-pap | [2025-06-21T11:51:03.989+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting listener policy-pap | [2025-06-21T11:51:03.989+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting timer policy-pap | [2025-06-21T11:51:03.989+00:00|INFO|TimerManager|http-nio-6969-exec-7] update timer registered Timer [name=cd677b84-d2a3-4c1b-a7fa-505658003fb1, expireMs=1750506693989] policy-pap | [2025-06-21T11:51:03.990+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting enqueue policy-pap | [2025-06-21T11:51:03.990+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate started policy-pap | [2025-06-21T11:51:03.990+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"cd677b84-d2a3-4c1b-a7fa-505658003fb1","timestampMs":1750506663939,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:51:03.991+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=cd677b84-d2a3-4c1b-a7fa-505658003fb1, expireMs=1750506693989] policy-pap | [2025-06-21T11:51:04.002+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | 
{"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"cd677b84-d2a3-4c1b-a7fa-505658003fb1","timestampMs":1750506663939,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:51:04.002+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-21T11:51:04.005+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"cd677b84-d2a3-4c1b-a7fa-505658003fb1","timestampMs":1750506663939,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:51:04.005+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-21T11:51:04.048+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"cd677b84-d2a3-4c1b-a7fa-505658003fb1","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"9d0c3142-8a87-486b-ae32-59b6d9d8c238","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506664037","deploymentInstanceInfo":""} policy-pap | [2025-06-21T11:51:04.049+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id cd677b84-d2a3-4c1b-a7fa-505658003fb1 policy-pap | [2025-06-21T11:51:04.049+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"cd677b84-d2a3-4c1b-a7fa-505658003fb1","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"9d0c3142-8a87-486b-ae32-59b6d9d8c238","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506664037","deploymentInstanceInfo":""} policy-pap | 
[2025-06-21T11:51:04.049+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping policy-pap | [2025-06-21T11:51:04.049+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping enqueue policy-pap | [2025-06-21T11:51:04.049+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping timer policy-pap | [2025-06-21T11:51:04.049+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=cd677b84-d2a3-4c1b-a7fa-505658003fb1, expireMs=1750506693989] policy-pap | [2025-06-21T11:51:04.050+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping listener policy-pap | [2025-06-21T11:51:04.050+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopped policy-pap | [2025-06-21T11:51:04.062+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate successful policy-pap | [2025-06-21T11:51:04.062+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 has no more requests policy-pap | [2025-06-21T11:51:04.062+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-21T11:51:27.541+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group opaGroup policy-pap | [2025-06-21T11:51:27.542+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-9] remove policy zoneB 1.0.6 from subgroup opaGroup opa count=1 policy-pap | [2025-06-21T11:51:27.542+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering an undeploy for policy zoneB 1.0.6 policy-pap | [2025-06-21T11:51:27.543+00:00|INFO|SessionData|http-nio-6969-exec-9] add update opa-384c17a2-8037-4a68-95a5-eea37a9fe744 opaGroup opa policies=0 policy-pap | [2025-06-21T11:51:27.543+00:00|INFO|SessionData|http-nio-6969-exec-9] update cached group opaGroup policy-pap | [2025-06-21T11:51:27.543+00:00|INFO|SessionData|http-nio-6969-exec-9] updating DB group opaGroup policy-pap | [2025-06-21T11:51:27.559+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-21T11:51:27Z, user=policyadmin)] policy-pap | [2025-06-21T11:51:27.575+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting policy-pap | [2025-06-21T11:51:27.575+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting listener policy-pap | [2025-06-21T11:51:27.575+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting timer policy-pap | [2025-06-21T11:51:27.575+00:00|INFO|TimerManager|http-nio-6969-exec-9] update timer registered Timer [name=d1299325-b9f9-4d29-b66a-68793b7dea43, expireMs=1750506717575] policy-pap | [2025-06-21T11:51:27.575+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting enqueue policy-pap | [2025-06-21T11:51:27.576+00:00|INFO|ServiceManager|http-nio-6969-exec-9] 
opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate started policy-pap | [2025-06-21T11:51:27.576+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"d1299325-b9f9-4d29-b66a-68793b7dea43","timestampMs":1750506687543,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:51:27.584+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"d1299325-b9f9-4d29-b66a-68793b7dea43","timestampMs":1750506687543,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:51:27.584+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-21T11:51:27.589+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"d1299325-b9f9-4d29-b66a-68793b7dea43","timestampMs":1750506687543,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:51:27.589+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-21T11:51:27.595+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"d1299325-b9f9-4d29-b66a-68793b7dea43","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"624c5ea4-2dab-46c9-b9af-e2cc1385d952","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506687586","deploymentInstanceInfo":""} policy-pap | [2025-06-21T11:51:27.596+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"d1299325-b9f9-4d29-b66a-68793b7dea43","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"624c5ea4-2dab-46c9-b9af-e2cc1385d952","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506687586","deploymentInstanceInfo":""} policy-pap | [2025-06-21T11:51:27.596+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id d1299325-b9f9-4d29-b66a-68793b7dea43 policy-pap | [2025-06-21T11:51:27.596+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping policy-pap | [2025-06-21T11:51:27.596+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping enqueue policy-pap | [2025-06-21T11:51:27.596+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping timer policy-pap | [2025-06-21T11:51:27.597+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=d1299325-b9f9-4d29-b66a-68793b7dea43, expireMs=1750506717575] policy-pap | [2025-06-21T11:51:27.597+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping listener policy-pap | [2025-06-21T11:51:27.597+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopped policy-pap | [2025-06-21T11:51:27.614+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]} policy-pap | [2025-06-21T11:51:27.613+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate successful policy-pap | [2025-06-21T11:51:27.614+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 has no more requests policy-pap | [2025-06-21T11:51:27.958+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group opaGroup policy-pap | [2025-06-21T11:51:27.961+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-8] failed to undeploy policy: zoneB null policy-pap | [2025-06-21T11:51:27.961+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-8] undeploy policy failed policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: zoneB null policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) policy-pap | at 
org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) policy-pap | at 
org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at 
org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) policy-pap | [2025-06-21T11:51:28.660+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group opaGroup policy-pap | 
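The WARN and PfModelException above are expected at this point in the CSIT: a second DELETE is issued for zoneB after the policy has already been undeployed, and PAP's PdpGroupDeleteProvider.undeployPolicy rejects it because the policy no longer appears in any PDP group (the trailing "null" is the version, which the DELETE left unspecified). A minimal sketch of that guard, assuming PAP-like semantics; names and structures are illustrative, not PAP's actual code:

    class PfModelException(Exception):
        """Stand-in for org.onap.policy.models.base.PfModelException."""

    def undeploy_policy(groups, name, version=None):
        # Render the identifier the way the log does: "zoneB null" when no
        # version is given on the DELETE request.
        ident = f"{name} {version if version is not None else 'null'}"
        deployed = any(
            p["name"] == name and (version is None or p["version"] == version)
            for group in groups
            for subgroup in group["subgroups"]
            for p in subgroup["policies"]
        )
        if not deployed:
            raise PfModelException(f"policy does not appear in any PDP group: {ident}")
        # ...otherwise remove the policy from the subgroup and queue a PDP_UPDATE,
        # as in the successful undeploy sequence logged earlier.

The same deploy, undeploy, then deliberately failing second undeploy pattern repeats for the vehicle policy in the records that follow.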
[2025-06-21T11:51:28.660+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-10] add policy vehicle 1.0.6 to subgroup opaGroup opa count=2 policy-pap | [2025-06-21T11:51:28.660+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy vehicle 1.0.6 policy-pap | [2025-06-21T11:51:28.660+00:00|INFO|SessionData|http-nio-6969-exec-10] add update opa-384c17a2-8037-4a68-95a5-eea37a9fe744 opaGroup opa policies=1 policy-pap | [2025-06-21T11:51:28.660+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group opaGroup policy-pap | [2025-06-21T11:51:28.661+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group opaGroup policy-pap | [2025-06-21T11:51:28.668+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=DEPLOYMENT, timestamp=2025-06-21T11:51:28Z, user=policyadmin)] policy-pap | [2025-06-21T11:51:28.677+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting policy-pap | [2025-06-21T11:51:28.678+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting listener policy-pap | [2025-06-21T11:51:28.678+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting timer policy-pap | [2025-06-21T11:51:28.678+00:00|INFO|TimerManager|http-nio-6969-exec-10] update timer registered Timer [name=68464c2b-48c1-409f-9dd8-77dcf323b352, expireMs=1750506718678] policy-pap | [2025-06-21T11:51:28.678+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting enqueue policy-pap | [2025-06-21T11:51:28.678+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate started policy-pap | [2025-06-21T11:51:28.678+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"68464c2b-48c1-409f-9dd8-77dcf323b352","timestampMs":1750506688660,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:51:28.685+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | 
{"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"68464c2b-48c1-409f-9dd8-77dcf323b352","timestampMs":1750506688660,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:51:28.685+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-21T11:51:28.688+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"68464c2b-48c1-409f-9dd8-77dcf323b352","timestampMs":1750506688660,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:51:28.689+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-21T11:51:28.715+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"68464c2b-48c1-409f-9dd8-77dcf323b352","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": 
\"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"a49b769d-0bc4-4466-9628-19d9efb2c8e6","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506688703","deploymentInstanceInfo":""} policy-pap | [2025-06-21T11:51:28.716+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"68464c2b-48c1-409f-9dd8-77dcf323b352","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"a49b769d-0bc4-4466-9628-19d9efb2c8e6","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506688703","deploymentInstanceInfo":""} policy-pap | [2025-06-21T11:51:28.717+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping policy-pap | [2025-06-21T11:51:28.717+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 68464c2b-48c1-409f-9dd8-77dcf323b352 policy-pap | [2025-06-21T11:51:28.717+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping enqueue policy-pap | [2025-06-21T11:51:28.717+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping timer policy-pap | [2025-06-21T11:51:28.717+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=68464c2b-48c1-409f-9dd8-77dcf323b352, expireMs=1750506718678] policy-pap | [2025-06-21T11:51:28.718+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping listener policy-pap | [2025-06-21T11:51:28.718+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopped policy-pap | [2025-06-21T11:51:28.727+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate successful policy-pap | [2025-06-21T11:51:28.728+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-21T11:51:28.728+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 has no more requests policy-pap | [2025-06-21T11:51:33.989+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=cd677b84-d2a3-4c1b-a7fa-505658003fb1, expireMs=1750506693989] policy-pap | [2025-06-21T11:51:45.944+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp 
heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"5ff917e7-fc62-44ab-b6e4-bc2fb6c2c1c0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506705932","deploymentInstanceInfo":""} policy-pap | [2025-06-21T11:51:45.945+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"5ff917e7-fc62-44ab-b6e4-bc2fb6c2c1c0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506705932","deploymentInstanceInfo":""} policy-pap | [2025-06-21T11:51:45.946+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-21T11:51:50.031+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms policy-pap | [2025-06-21T11:51:53.099+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group opaGroup policy-pap | [2025-06-21T11:51:53.099+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-2] remove policy vehicle 1.0.6 from subgroup opaGroup opa count=1 policy-pap | [2025-06-21T11:51:53.099+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] Registering an undeploy for policy vehicle 1.0.6 policy-pap | [2025-06-21T11:51:53.100+00:00|INFO|SessionData|http-nio-6969-exec-2] add update opa-384c17a2-8037-4a68-95a5-eea37a9fe744 opaGroup opa policies=0 policy-pap | [2025-06-21T11:51:53.100+00:00|INFO|SessionData|http-nio-6969-exec-2] update cached group opaGroup policy-pap | [2025-06-21T11:51:53.100+00:00|INFO|SessionData|http-nio-6969-exec-2] updating DB group opaGroup policy-pap | [2025-06-21T11:51:53.106+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-21T11:51:53Z, user=policyadmin)] policy-pap | [2025-06-21T11:51:53.113+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting policy-pap | [2025-06-21T11:51:53.113+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting listener policy-pap | [2025-06-21T11:51:53.114+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting timer policy-pap | [2025-06-21T11:51:53.114+00:00|INFO|TimerManager|http-nio-6969-exec-2] update timer registered Timer [name=40cb89ce-924b-4aa6-8d74-47387d8df365, expireMs=1750506743114] policy-pap | [2025-06-21T11:51:53.114+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting enqueue policy-pap | [2025-06-21T11:51:53.114+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate started policy-pap | [2025-06-21T11:51:53.114+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"40cb89ce-924b-4aa6-8d74-47387d8df365","timestampMs":1750506713099,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:51:53.114+00:00|INFO|TimerManager|Thread-9] update timer waiting 30000ms Timer [name=40cb89ce-924b-4aa6-8d74-47387d8df365, expireMs=1750506743114] policy-pap | [2025-06-21T11:51:53.121+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"40cb89ce-924b-4aa6-8d74-47387d8df365","timestampMs":1750506713099,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:51:53.121+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-21T11:51:53.123+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"40cb89ce-924b-4aa6-8d74-47387d8df365","timestampMs":1750506713099,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:51:53.123+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-21T11:51:53.130+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"40cb89ce-924b-4aa6-8d74-47387d8df365","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"3254fdbc-6122-4114-a94a-27a1e2a6113d","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506713121","deploymentInstanceInfo":""} policy-pap | [2025-06-21T11:51:53.130+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 40cb89ce-924b-4aa6-8d74-47387d8df365 policy-pap | [2025-06-21T11:51:53.131+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"40cb89ce-924b-4aa6-8d74-47387d8df365","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"3254fdbc-6122-4114-a94a-27a1e2a6113d","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506713121","deploymentInstanceInfo":""} policy-pap | [2025-06-21T11:51:53.131+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping policy-pap | [2025-06-21T11:51:53.131+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping enqueue
policy-pap | [2025-06-21T11:51:53.131+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping timer
policy-pap | [2025-06-21T11:51:53.131+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=40cb89ce-924b-4aa6-8d74-47387d8df365, expireMs=1750506743114]
policy-pap | [2025-06-21T11:51:53.131+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping listener
policy-pap | [2025-06-21T11:51:53.131+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopped
policy-pap | [2025-06-21T11:51:53.141+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate successful
policy-pap | [2025-06-21T11:51:53.141+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 has no more requests
policy-pap | [2025-06-21T11:51:53.141+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification]
policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]}
policy-pap | [2025-06-21T11:51:53.459+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group opaGroup
policy-pap | [2025-06-21T11:51:53.459+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-4] failed to undeploy policy: vehicle null
policy-pap | [2025-06-21T11:51:53.459+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-4] undeploy policy failed
policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: vehicle null
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108)
policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy()
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy()
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258)
policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191)
policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118)
policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986)
policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891)
policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089)
policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979)
policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014)
policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936)
policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659)
policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885)
policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108)
policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128)
policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126)
policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107)
policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90)
policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82)
policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233)
policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191)
policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113)
policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243)
policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113)
policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74)
policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238)
policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362)
policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167)
policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90)
policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483)
policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116)
policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93)
policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344)
policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398)
policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63)
policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903)
policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740)
policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189)
policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658)
policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63)
policy-pap | at java.base/java.lang.Thread.run(Thread.java:840)
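The WARN/PfModelException above is the negative branch of the CSIT run: the policy-notification message earlier already reported "vehicle" as successfully undeployed, so a repeated undeploy request correctly fails with "policy does not appear in any PDP group". A minimal sketch of the kind of request that produces this response, assuming the standard PAP API path on the port visible in the worker thread names (http-nio-6969); the host and credential variables are placeholders, not values taken from this log:

    # Second DELETE for an already-undeployed policy; PAP is expected to
    # answer with the "policy does not appear in any PDP group" error above.
    curl -sk -u "${PAP_USER}:${PAP_PASS}" -X DELETE \
        "https://localhost:6969/policy/pap/v1/pdps/policies/vehicle"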
policy-pap | [2025-06-21T11:51:54.130+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group opaGroup
policy-pap | [2025-06-21T11:51:54.130+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-3] add policy abac 1.0.7 to subgroup opaGroup opa count=2
policy-pap | [2025-06-21T11:51:54.130+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy abac 1.0.7
policy-pap | [2025-06-21T11:51:54.130+00:00|INFO|SessionData|http-nio-6969-exec-3] add update opa-384c17a2-8037-4a68-95a5-eea37a9fe744 opaGroup opa policies=1
policy-pap | [2025-06-21T11:51:54.130+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group opaGroup
policy-pap | [2025-06-21T11:51:54.130+00:00|INFO|SessionData|http-nio-6969-exec-3] updating DB group opaGroup
policy-pap | [2025-06-21T11:51:54.137+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=DEPLOYMENT, timestamp=2025-06-21T11:51:54Z, user=policyadmin)]
policy-pap | [2025-06-21T11:51:54.145+00:00|INFO|ServiceManager|http-nio-6969-exec-3] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting
policy-pap | [2025-06-21T11:51:54.145+00:00|INFO|ServiceManager|http-nio-6969-exec-3] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting listener
policy-pap | [2025-06-21T11:51:54.145+00:00|INFO|ServiceManager|http-nio-6969-exec-3] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting timer
policy-pap | [2025-06-21T11:51:54.145+00:00|INFO|TimerManager|http-nio-6969-exec-3] update timer registered Timer [name=8adeaa83-8371-454d-82eb-ab314833e655, expireMs=1750506744145]
policy-pap | [2025-06-21T11:51:54.145+00:00|INFO|ServiceManager|http-nio-6969-exec-3] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting enqueue
policy-pap | [2025-06-21T11:51:54.145+00:00|INFO|ServiceManager|http-nio-6969-exec-3] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate started
policy-pap | [2025-06-21T11:51:54.145+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap |
{"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8adeaa83-8371-454d-82eb-ab314833e655","timestampMs":1750506714130,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:51:54.151+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCi
AgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8adeaa83-8371-454d-82eb-ab314833e655","timestampMs":1750506714130,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:51:54.152+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-21T11:51:54.154+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | 
{"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8adeaa83-8371-454d-82eb-ab314833e655","timestampMs":1750506714130,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-21T11:51:54.155+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-21T11:51:54.185+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8adeaa83-8371-454d-82eb-ab314833e655","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"14e3b33b-8ea0-4f50-82f8-07eb393270ab","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506714174","deploymentInstanceInfo":""} policy-pap | [2025-06-21T11:51:54.186+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping policy-pap | [2025-06-21T11:51:54.186+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping enqueue policy-pap | [2025-06-21T11:51:54.186+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping timer policy-pap | [2025-06-21T11:51:54.186+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=8adeaa83-8371-454d-82eb-ab314833e655, expireMs=1750506744145] policy-pap | [2025-06-21T11:51:54.186+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping listener policy-pap | [2025-06-21T11:51:54.186+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopped policy-pap | [2025-06-21T11:51:54.187+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8adeaa83-8371-454d-82eb-ab314833e655","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": 
\"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"14e3b33b-8ea0-4f50-82f8-07eb393270ab","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506714174","deploymentInstanceInfo":""} policy-pap | [2025-06-21T11:51:54.188+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 8adeaa83-8371-454d-82eb-ab314833e655 policy-pap | [2025-06-21T11:51:54.194+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate successful policy-pap | [2025-06-21T11:51:54.194+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 has no more requests policy-pap | [2025-06-21T11:51:54.194+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-21T11:52:18.770+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group opaGroup policy-pap | [2025-06-21T11:52:18.770+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy abac 1.0.7 from subgroup opaGroup opa count=1 policy-pap | [2025-06-21T11:52:18.770+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy abac 1.0.7 policy-pap | [2025-06-21T11:52:18.770+00:00|INFO|SessionData|http-nio-6969-exec-5] add update opa-384c17a2-8037-4a68-95a5-eea37a9fe744 opaGroup opa policies=0 policy-pap | [2025-06-21T11:52:18.770+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group opaGroup policy-pap | [2025-06-21T11:52:18.770+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group opaGroup policy-pap | [2025-06-21T11:52:18.776+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=UNDEPLOYMENT, timestamp=2025-06-21T11:52:18Z, user=policyadmin)] policy-pap | [2025-06-21T11:52:18.783+00:00|INFO|ServiceManager|http-nio-6969-exec-5] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting policy-pap | [2025-06-21T11:52:18.783+00:00|INFO|ServiceManager|http-nio-6969-exec-5] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting listener policy-pap | [2025-06-21T11:52:18.783+00:00|INFO|ServiceManager|http-nio-6969-exec-5] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting timer policy-pap | [2025-06-21T11:52:18.783+00:00|INFO|TimerManager|http-nio-6969-exec-5] update timer registered Timer [name=1f7a7bf3-09ae-49f4-a65a-bd531ef8e211, expireMs=1750506768783] policy-pap | [2025-06-21T11:52:18.784+00:00|INFO|ServiceManager|http-nio-6969-exec-5] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate starting enqueue policy-pap | [2025-06-21T11:52:18.784+00:00|INFO|ServiceManager|http-nio-6969-exec-5] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate started policy-pap | [2025-06-21T11:52:18.784+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"1f7a7bf3-09ae-49f4-a65a-bd531ef8e211","timestampMs":1750506738770,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 
policy-pap | [2025-06-21T11:52:18.789+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"1f7a7bf3-09ae-49f4-a65a-bd531ef8e211","timestampMs":1750506738770,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
policy-pap | [2025-06-21T11:52:18.789+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-pap | [2025-06-21T11:52:18.792+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-55a8db0f-2911-4b47-95ac-737be19bcc25","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"1f7a7bf3-09ae-49f4-a65a-bd531ef8e211","timestampMs":1750506738770,"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
policy-pap | [2025-06-21T11:52:18.792+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-pap | [2025-06-21T11:52:18.803+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"1f7a7bf3-09ae-49f4-a65a-bd531ef8e211","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"205df326-e233-482a-b13b-f02884364058","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506738793","deploymentInstanceInfo":""}
policy-pap | [2025-06-21T11:52:18.803+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"1f7a7bf3-09ae-49f4-a65a-bd531ef8e211","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-384c17a2-8037-4a68-95a5-eea37a9fe744","requestId":"205df326-e233-482a-b13b-f02884364058","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750506738793","deploymentInstanceInfo":""}
policy-pap | [2025-06-21T11:52:18.803+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 1f7a7bf3-09ae-49f4-a65a-bd531ef8e211
policy-pap | [2025-06-21T11:52:18.803+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping
policy-pap | [2025-06-21T11:52:18.803+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping enqueue
policy-pap | [2025-06-21T11:52:18.803+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping timer
policy-pap | [2025-06-21T11:52:18.803+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=1f7a7bf3-09ae-49f4-a65a-bd531ef8e211, expireMs=1750506768783]
policy-pap | [2025-06-21T11:52:18.803+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopping listener
policy-pap | [2025-06-21T11:52:18.803+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate stopped
policy-pap | [2025-06-21T11:52:18.812+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 PdpUpdate successful
policy-pap | [2025-06-21T11:52:18.812+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-384c17a2-8037-4a68-95a5-eea37a9fe744 has no more requests
policy-pap | [2025-06-21T11:52:18.812+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification]
policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}]}
policy-pap | [2025-06-21T11:52:19.093+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group opaGroup
policy-pap | [2025-06-21T11:52:19.094+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-7] failed to undeploy policy: abac null
policy-pap | [2025-06-21T11:52:19.094+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-7] undeploy policy failed
policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: abac null
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108)
policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy()
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy()
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258)
policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191)
policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118)
policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986)
policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891)
policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089)
policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979)
policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014)
policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936)
policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659)
policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885)
policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108)
policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128)
policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126)
policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107)
policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90)
policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82)
policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233)
policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191)
policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113)
policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243)
policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113)
policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74)
policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238)
policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362)
policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167)
policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90)
policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483)
policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116)
policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93)
policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344)
policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398)
policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63)
policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903)
policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740)
policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189)
policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658)
policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63)
policy-pap | at java.base/java.lang.Thread.run(Thread.java:840)
policy-pap | [2025-06-21T11:52:23.114+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=40cb89ce-924b-4aa6-8d74-47387d8df365, expireMs=1750506743114]
postgres | The files belonging to this database system will be owned by user "postgres".
postgres | This user must also own the server process.
postgres |
postgres | The database cluster will be initialized with locale "en_US.utf8".
postgres | The default database encoding has accordingly been set to "UTF8".
postgres | The default text search configuration will be set to "english". postgres | postgres | Data page checksums are disabled. postgres | postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok postgres | creating subdirectories ... ok postgres | selecting dynamic shared memory implementation ... posix postgres | selecting default max_connections ... 100 postgres | selecting default shared_buffers ... 128MB postgres | selecting default time zone ... Etc/UTC postgres | creating configuration files ... ok postgres | running bootstrap script ... ok postgres | performing post-bootstrap initialization ... ok postgres | syncing data to disk ... ok postgres | postgres | postgres | Success. You can now start the database server using: postgres | postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start postgres | postgres | initdb: warning: enabling "trust" authentication for local connections postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. postgres | waiting for server to start....2025-06-21 11:47:12.377 UTC [47] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-21 11:47:12.379 UTC [47] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-21 11:47:12.389 UTC [50] LOG: database system was shut down at 2025-06-21 11:47:11 UTC postgres | 2025-06-21 11:47:12.394 UTC [47] LOG: database system is ready to accept connections postgres | done postgres | server started postgres | postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf postgres | postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh postgres | #!/bin/bash -xv postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved postgres | # postgres | # Licensed under the Apache License, Version 2.0 (the "License"); postgres | # you may not use this file except in compliance with the License. postgres | # You may obtain a copy of the License at postgres | # postgres | # http://www.apache.org/licenses/LICENSE-2.0 postgres | # postgres | # Unless required by applicable law or agreed to in writing, software postgres | # distributed under the License is distributed on an "AS IS" BASIS, postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. postgres | # See the License for the specific language governing permissions and postgres | # limitations under the License. 
postgres | postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' postgres | CREATE ROLE postgres | postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | do postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" postgres | done postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' postgres | GRANT postgres | postgres | 2025-06-21 11:47:13.846 UTC [47] LOG: received fast shutdown request postgres 
| waiting for server to shut down....2025-06-21 11:47:13.847 UTC [47] LOG: aborting any active transactions postgres | 2025-06-21 11:47:13.849 UTC [47] LOG: background worker "logical replication launcher" (PID 53) exited with exit code 1 postgres | 2025-06-21 11:47:13.850 UTC [48] LOG: shutting down postgres | 2025-06-21 11:47:13.852 UTC [48] LOG: checkpoint starting: shutdown immediate postgres | 2025-06-21 11:47:14.405 UTC [48] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.422 s, sync=0.125 s, total=0.555 s; sync files=1788, longest=0.009 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 postgres | 2025-06-21 11:47:14.413 UTC [47] LOG: database system is shut down postgres | done postgres | server stopped postgres | postgres | PostgreSQL init process complete; ready for start up. postgres | postgres | 2025-06-21 11:47:14.473 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-21 11:47:14.474 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 postgres | 2025-06-21 11:47:14.474 UTC [1] LOG: listening on IPv6 address "::", port 5432 postgres | 2025-06-21 11:47:14.478 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-21 11:47:14.486 UTC [100] LOG: database system was shut down at 2025-06-21 11:47:14 UTC postgres | 2025-06-21 11:47:14.490 UTC [1] LOG: database system is ready to accept connections postgres | 2025-06-21 11:52:14.581 UTC [98] LOG: checkpoint starting: time postgres | 2025-06-21 11:53:19.403 UTC [98] LOG: checkpoint complete: wrote 650 buffers (4.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=64.790 s, sync=0.023 s, total=64.823 s; sync files=515, longest=0.002 s, average=0.001 s; distance=3534 kB, estimate=3534 kB; lsn=0/31502E0, redo lsn=0/314DDE0 prometheus | time=2025-06-21T11:47:09.201Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d prometheus | time=2025-06-21T11:47:09.201Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" prometheus | time=2025-06-21T11:47:09.201Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" prometheus | time=2025-06-21T11:47:09.204Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs prometheus | time=2025-06-21T11:47:09.208Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 prometheus | time=2025-06-21T11:47:09.210Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 prometheus | time=2025-06-21T11:47:09.210Z level=INFO source=tls_config.go:350 msg="TLS is disabled." component=web http2=false address=[::]:9090 prometheus | time=2025-06-21T11:47:09.210Z level=INFO source=main.go:1266 msg="Starting TSDB ..." 
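For reference, the db-pg.sh step above creates the policy_user role and the six policy databases (migration, pooling, policyadmin, policyclamp, operationshistory, clampacm) and hands their ownership to policy_user. A minimal sketch of verifying that state by hand against the running container follows; the container name "postgres" is taken from the compose teardown later in this log, trust authentication for local connections is noted by initdb above, and everything else is an assumption rather than part of the CSIT scripts.

#!/bin/bash
# Sketch only: confirm the role and databases provisioned by db-pg.sh.
docker exec postgres psql -U postgres -c '\du policy_user'   # role created by the script
docker exec postgres psql -U postgres -c '\l'                # should list the six policy databases
for db in migration pooling policyadmin policyclamp operationshistory clampacm; do
  docker exec postgres psql -U postgres -d "$db" -c 'SELECT current_database(), current_user;'
done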
prometheus | time=2025-06-21T11:47:09.221Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb prometheus | time=2025-06-21T11:47:09.221Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=2.64µs prometheus | time=2025-06-21T11:47:09.221Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb prometheus | time=2025-06-21T11:47:09.222Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=395.964µs prometheus | time=2025-06-21T11:47:09.222Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=39.401µs wal_replay_duration=424.785µs wbl_replay_duration=190ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=2.64µs total_replay_duration=537.906µs prometheus | time=2025-06-21T11:47:09.225Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC prometheus | time=2025-06-21T11:47:09.225Z level=INFO source=main.go:1290 msg="TSDB started" prometheus | time=2025-06-21T11:47:09.225Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml prometheus | time=2025-06-21T11:47:09.226Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 prometheus | time=2025-06-21T11:47:09.226Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.22µs remote_storage=2.48µs web_handler=620ns query_engine=1.42µs scrape=273.872µs scrape_sd=147.211µs notify=144.702µs notify_sd=12.34µs rules=2.22µs tracing=4.59µs filename=/etc/prometheus/prometheus.yml totalDuration=1.370493ms prometheus | time=2025-06-21T11:47:09.226Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." prometheus | time=2025-06-21T11:47:09.227Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2025-06-21 11:47:10,238] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-21 11:47:10,240] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-21 11:47:10,240] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-21 11:47:10,240] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-21 11:47:10,240] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-21 11:47:10,241] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-21 11:47:10,242] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-21 11:47:10,242] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-21 11:47:10,242] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2025-06-21 11:47:10,243] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2025-06-21 11:47:10,243] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-21 11:47:10,243] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-21 11:47:10,243] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-21 11:47:10,243] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-21 11:47:10,243] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-21 11:47:10,243] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2025-06-21 11:47:10,253] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2025-06-21 11:47:10,255] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-21 11:47:10,255] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-21 11:47:10,257] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-21 11:47:10,264] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,264] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,264] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,264] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,265] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,265] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,265] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,265] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,265] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,265] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,266] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,266] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,266] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,266] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | 
[2025-06-21 11:47:10,266] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,266] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-reso
urce-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kaf
ka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,266] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,266] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,266] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,266] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,266] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,266] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,266] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,266] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,266] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,266] INFO Server environment:os.memory.free=495MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,266] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,266] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,267] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,267] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,267] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,267] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,267] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,267] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,267] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,267] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper | [2025-06-21 11:47:10,268] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,268] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,269] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-21 11:47:10,270] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-21 11:47:10,270] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-21 11:47:10,270] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-21 11:47:10,270] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-21 11:47:10,270] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-21 11:47:10,270] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-21 11:47:10,270] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-21 11:47:10,272] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,272] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,273] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-21 11:47:10,273] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-21 11:47:10,273] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,291] INFO Logging initialized @374ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper | [2025-06-21 11:47:10,340] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-21 11:47:10,340] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-21 11:47:10,355] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server) zookeeper | [2025-06-21 11:47:10,387] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper | [2025-06-21 11:47:10,387] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper | [2025-06-21 11:47:10,388] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) zookeeper | [2025-06-21 11:47:10,393] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper | [2025-06-21 11:47:10,404] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-21 11:47:10,413] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper | [2025-06-21 11:47:10,413] INFO Started @499ms (org.eclipse.jetty.server.Server) zookeeper | [2025-06-21 11:47:10,413] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper | [2025-06-21 11:47:10,416] INFO Using 
org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-21 11:47:10,417] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-21 11:47:10,418] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-21 11:47:10,419] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-21 11:47:10,433] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-21 11:47:10,433] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-21 11:47:10,434] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-21 11:47:10,434] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-21 11:47:10,441] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper | [2025-06-21 11:47:10,441] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-21 11:47:10,444] INFO Snapshot loaded in 10 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-21 11:47:10,445] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-21 11:47:10,446] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-21 11:47:10,456] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper | [2025-06-21 11:47:10,456] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper | [2025-06-21 11:47:10,470] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper | [2025-06-21 11:47:10,471] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper | [2025-06-21 11:47:11,513] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) Tearing down containers... 
Container grafana Stopping Container policy-opa-pdp Stopping Container policy-csit Stopping Container policy-csit Stopped Container policy-csit Removing Container policy-csit Removed Container grafana Stopped Container grafana Removing Container grafana Removed Container prometheus Stopping Container prometheus Stopped Container prometheus Removing Container prometheus Removed Container policy-opa-pdp Stopped Container policy-opa-pdp Removing Container policy-opa-pdp Removed Container policy-pap Stopping Container policy-pap Stopped Container policy-pap Removing Container policy-pap Removed Container kafka Stopping Container policy-api Stopping Container kafka Stopped Container kafka Removing Container kafka Removed Container zookeeper Stopping Container zookeeper Stopped Container zookeeper Removing Container zookeeper Removed Container policy-api Stopped Container policy-api Removing Container policy-api Removed Container policy-db-migrator Stopping Container policy-db-migrator Stopped Container policy-db-migrator Removing Container policy-db-migrator Removed Container postgres Stopping Container postgres Stopped Container postgres Removing Container postgres Removed Network compose_default Removing Network compose_default Removed $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 2100 killed; [ssh-agent] Stopped. Robot results publisher started... INFO: Checking test criticality is deprecated and will be dropped in a future release! -Parsing output xml: Done! -Copying log files to build dir: Done! -Assigning results to build: Done! -Checking thresholds: Done! Done publishing Robot results. [PostBuildScript] - [INFO] Executing post build scripts. [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins14963960156290112557.sh ---> sysstat.sh [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins13331501863158695952.sh ---> package-listing.sh ++ facter osfamily ++ tr '[:upper:]' '[:lower:]' + OS_FAMILY=debian + workspace=/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp ']' + mkdir -p /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/archives/ [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins13402129906098868852.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-N0J9 from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-N0J9/bin to PATH INFO: Running in OpenStack, capturing instance metadata [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins9374226358711068164.sh provisioning config files... 
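For reference, the package-listing.sh trace above captures the dpkg package set at the end of the job, diffs it against the set recorded at job start, and archives all three files. A self-contained sketch of the same idea follows; the file locations mirror the /tmp/packages_*.txt names seen in the trace, and the surrounding logic is an assumption, not the job's actual script.

#!/bin/bash
# Sketch only: snapshot installed .deb packages and diff against an earlier snapshot.
START=/tmp/packages_start.txt
END=/tmp/packages_end.txt
DIFF=/tmp/packages_diff.txt
dpkg -l | grep '^ii' > "$END"              # current package set
if [ -f "$START" ]; then
  diff "$START" "$END" > "$DIFF" || true   # diff exits non-zero when the sets differ
fi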
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp@tmp/config2915666155919563751tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] Run condition [Regular expression match] preventing perform for step [Provide Configuration files] [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins9188808927335031726.sh ---> create-netrc.sh [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins1738929758852075483.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-N0J9 from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-N0J9/bin to PATH [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins882827059180778701.sh ---> sudo-logs.sh Archiving 'sudo' log.. [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins12912109853128799930.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-N0J9 from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-N0J9/bin to PATH INFO: No Stack... INFO: Retrieving Pricing Info for: v3-standard-8 INFO: Archiving Costs [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash -l /tmp/jenkins10876175133729388611.sh ---> logs-deploy.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-N0J9 from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-N0J9/bin to PATH INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-opa-pdp-master-project-csit-policy-opa-pdp/184 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt Archives upload complete. 
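The remainder of the console output is the post-job host snapshot: kernel and CPU details, filesystem and memory usage, network interfaces, and sysstat (sar) counters collected over the run, as dumped below. A rough sketch of gathering an equivalent snapshot on a Debian-family host is shown here, assuming the sysstat package is installed and collecting data; the job's own sysstat.sh may differ.

#!/bin/bash
# Sketch only: collect the same categories of host facts that appear below.
uname -a           # kernel and architecture
lscpu              # CPU topology and flags
nproc              # online CPU count
df -h              # filesystem usage
free -m            # memory and swap
ip addr            # network interfaces and addresses
sar -b -r -n DEV   # I/O, memory and per-interface network counters
sar -P ALL         # per-CPU utilisation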
INFO: archiving logs to Nexus ---> uname -a: Linux prd-ubuntu1804-docker-8c-8g-22753 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux ---> lscpu: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 8 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC-Rome Processor Stepping: 0 CPU MHz: 2799.998 BogoMIPS: 5599.99 Virtualization: AMD-V Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 512K L3 cache: 16384K NUMA node0 CPU(s): 0-7 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities ---> nproc: 8 ---> df -h: Filesystem Size Used Avail Use% Mounted on udev 16G 0 16G 0% /dev tmpfs 3.2G 708K 3.2G 1% /run /dev/vda1 155G 15G 141G 10% / tmpfs 16G 0 16G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 16G 0 16G 0% /sys/fs/cgroup /dev/vda15 105M 4.4M 100M 5% /boot/efi tmpfs 3.2G 0 3.2G 0% /run/user/1001 ---> free -m: total used free shared buff/cache available Mem: 32167 923 24007 0 7236 30788 Swap: 1023 0 1023 ---> ip addr: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 link/ether fa:16:3e:12:31:cc brd ff:ff:ff:ff:ff:ff inet 10.30.107.233/23 brd 10.30.107.255 scope global dynamic ens3 valid_lft 85793sec preferred_lft 85793sec inet6 fe80::f816:3eff:fe12:31cc/64 scope link valid_lft forever preferred_lft forever 3: docker0: mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:af:e2:f5:bc brd ff:ff:ff:ff:ff:ff inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::42:afff:fee2:f5bc/64 scope link valid_lft forever preferred_lft forever ---> sar -b -r -n DEV: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-22753) 06/21/25 _x86_64_ (8 CPU) 11:44:31 LINUX RESTART (8 CPU) 11:45:02 tps rtps wtps bread/s bwrtn/s 11:46:01 308.79 31.77 277.02 1062.69 50772.75 11:47:01 413.65 20.23 393.42 2253.76 189169.54 11:48:01 351.66 4.63 347.03 418.20 75722.58 11:49:01 18.03 0.00 18.03 0.00 21249.93 11:50:01 6.72 0.00 6.72 0.00 155.17 11:51:01 216.88 0.43 216.45 43.06 33858.09 11:52:01 8.50 0.00 8.50 0.00 206.90 11:53:01 8.23 0.00 8.23 0.00 226.90 11:54:01 56.44 0.80 55.64 40.39 1046.36 Average: 154.04 6.38 147.66 423.06 41361.44 11:45:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 11:46:01 30066948 31626084 2872272 8.72 69804 1800200 1372896 4.04 925308 1656776 162460 11:47:01 25237444 31594692 7701776 23.38 148028 6300676 1663784 4.90 1060816 6075968 665344 11:48:01 23286132 29955636 9653088 29.31 163648 6598560 7367208 21.68 2910804 
6094872 2176 11:49:01 23318500 29988644 9620720 29.21 163836 6599432 7468796 21.97 2877968 6093584 128 11:50:01 23314260 29970436 9624960 29.22 164080 6585808 7538668 22.18 2896708 6077972 168 11:51:01 22673448 29866432 10265772 31.17 204512 7030268 8276832 24.35 3085168 6472540 496 11:52:01 22654860 29851512 10284360 31.22 204628 7032116 7841572 23.07 3135236 6438284 184 11:53:01 22656968 29854056 10282252 31.22 204744 7032312 7850240 23.10 3133012 6438052 672 11:54:01 24581600 31513932 8357620 25.37 205588 6762672 1601352 4.71 1530892 6191480 11304 Average: 24198907 30469047 8740313 26.53 169874 6193560 5664594 16.67 2395101 5726614 93659 11:45:02 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 11:46:01 lo 1.36 1.36 0.16 0.16 0.00 0.00 0.00 0.00 11:46:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 11:46:01 ens3 479.53 320.03 1696.48 81.39 0.00 0.00 0.00 0.00 11:47:01 br-9604bdc398a5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 11:47:01 lo 13.20 13.20 1.20 1.20 0.00 0.00 0.00 0.00 11:47:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 11:47:01 ens3 1146.66 699.92 33059.87 56.72 0.00 0.00 0.00 0.00 11:48:01 br-9604bdc398a5 48.06 65.66 3.09 309.63 0.00 0.00 0.00 0.00 11:48:01 veth0ffbdf2 91.90 91.65 16.03 18.63 0.00 0.00 0.00 0.00 11:48:01 lo 1.40 1.40 0.11 0.11 0.00 0.00 0.00 0.00 11:48:01 veth29a511f 0.27 0.57 0.02 0.45 0.00 0.00 0.00 0.00 11:49:01 br-9604bdc398a5 0.50 0.30 0.03 0.02 0.00 0.00 0.00 0.00 11:49:01 veth0ffbdf2 0.17 0.18 0.54 0.02 0.00 0.00 0.00 0.00 11:49:01 lo 1.40 1.40 0.11 0.11 0.00 0.00 0.00 0.00 11:49:01 veth29a511f 0.37 0.37 0.04 1.00 0.00 0.00 0.00 0.00 11:50:01 br-9604bdc398a5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 11:50:01 veth0ffbdf2 0.18 0.20 0.54 0.02 0.00 0.00 0.00 0.00 11:50:01 lo 1.20 1.20 0.09 0.09 0.00 0.00 0.00 0.00 11:50:01 veth29a511f 0.53 0.53 0.05 1.16 0.00 0.00 0.00 0.00 11:51:01 br-9604bdc398a5 0.20 0.27 0.02 0.02 0.00 0.00 0.00 0.00 11:51:01 veth0ffbdf2 0.18 0.22 0.54 0.02 0.00 0.00 0.00 0.00 11:51:01 veth47ea314 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 11:51:01 lo 2.53 2.53 0.21 0.21 0.00 0.00 0.00 0.00 11:52:01 br-9604bdc398a5 0.07 0.00 0.00 0.00 0.00 0.00 0.00 0.00 11:52:01 veth0ffbdf2 101.67 101.27 12.04 24.88 0.00 0.00 0.00 0.00 11:52:01 veth47ea314 4.18 3.48 0.66 0.81 0.00 0.00 0.00 0.00 11:52:01 lo 1.20 1.20 0.09 0.09 0.00 0.00 0.00 0.00 11:53:01 br-9604bdc398a5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 11:53:01 veth0ffbdf2 35.13 34.98 4.32 8.46 0.00 0.00 0.00 0.00 11:53:01 veth47ea314 2.88 2.58 0.31 0.30 0.00 0.00 0.00 0.00 11:53:01 lo 1.40 1.40 0.11 0.11 0.00 0.00 0.00 0.00 11:54:01 lo 2.53 2.53 0.24 0.24 0.00 0.00 0.00 0.00 11:54:01 docker0 138.68 186.99 8.77 1349.07 0.00 0.00 0.00 0.00 11:54:01 ens3 2000.17 1299.45 37403.48 189.75 0.00 0.00 0.00 0.00 Average: lo 2.92 2.92 0.26 0.26 0.00 0.00 0.00 0.00 Average: docker0 15.44 20.81 0.98 150.17 0.00 0.00 0.00 0.00 Average: ens3 219.02 141.76 4152.90 20.82 0.00 0.00 0.00 0.00 ---> sar -P ALL: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-22753) 06/21/25 _x86_64_ (8 CPU) 11:44:31 LINUX RESTART (8 CPU) 11:45:02 CPU %user %nice %system %iowait %steal %idle 11:46:01 all 10.94 0.00 0.99 3.77 0.04 84.26 11:46:01 0 24.58 0.00 1.54 2.99 0.05 70.83 11:46:01 1 20.46 0.00 2.24 4.78 0.05 72.47 11:46:01 2 6.08 0.00 0.85 0.15 0.05 92.87 11:46:01 3 9.71 0.00 0.64 0.27 0.03 89.34 11:46:01 4 4.41 0.00 0.29 0.02 0.02 95.27 11:46:01 5 6.11 0.00 0.36 0.00 0.02 93.51 11:46:01 6 5.37 0.00 0.70 14.23 0.02 79.68 11:46:01 7 10.79 0.00 1.27 7.72 0.03 80.18 11:47:01 all 15.77 0.00 6.42 11.67 
0.06 66.08 11:47:01 0 14.03 0.00 7.43 8.44 0.05 70.04 11:47:01 1 14.30 0.00 6.39 7.55 0.07 71.69 11:47:01 2 15.97 0.00 5.26 2.55 0.05 76.17 11:47:01 3 14.65 0.00 6.12 13.27 0.07 65.89 11:47:01 4 13.66 0.00 6.60 2.39 0.05 77.30 11:47:01 5 14.24 0.00 6.31 12.26 0.05 67.14 11:47:01 6 13.58 0.00 7.23 27.63 0.07 51.50 11:47:01 7 25.71 0.00 6.09 19.40 0.08 48.71 11:48:01 all 24.76 0.00 3.39 3.95 0.08 67.82 11:48:01 0 26.10 0.00 4.28 8.14 0.08 61.39 11:48:01 1 30.95 0.00 3.53 2.54 0.08 62.90 11:48:01 2 28.32 0.00 3.48 1.27 0.08 66.83 11:48:01 3 20.40 0.00 3.05 0.84 0.07 75.64 11:48:01 4 30.06 0.00 3.24 4.68 0.07 61.95 11:48:01 5 20.68 0.00 2.96 2.39 0.08 73.88 11:48:01 6 21.98 0.00 3.37 10.09 0.08 64.47 11:48:01 7 19.61 0.00 3.17 1.69 0.07 75.46 11:49:01 all 1.04 0.00 0.16 1.06 0.03 97.71 11:49:01 0 0.97 0.00 0.17 0.00 0.03 98.83 11:49:01 1 1.00 0.00 0.25 0.53 0.05 98.16 11:49:01 2 1.35 0.00 0.12 0.00 0.02 98.52 11:49:01 3 0.89 0.00 0.08 0.00 0.03 98.99 11:49:01 4 1.22 0.00 0.20 0.00 0.03 98.55 11:49:01 5 0.90 0.00 0.15 0.20 0.02 98.73 11:49:01 6 1.02 0.00 0.18 7.75 0.02 91.03 11:49:01 7 0.93 0.00 0.13 0.00 0.02 98.92 11:50:01 all 1.74 0.00 0.27 0.02 0.03 97.94 11:50:01 0 1.49 0.00 0.35 0.00 0.02 98.15 11:50:01 1 1.57 0.00 0.57 0.02 0.05 97.79 11:50:01 2 2.44 0.00 0.20 0.02 0.02 97.33 11:50:01 3 1.16 0.00 0.10 0.00 0.02 98.73 11:50:01 4 2.47 0.00 0.28 0.02 0.05 97.18 11:50:01 5 1.13 0.00 0.22 0.08 0.02 98.55 11:50:01 6 1.52 0.00 0.35 0.00 0.03 98.10 11:50:01 7 2.15 0.00 0.15 0.02 0.02 97.66 11:51:01 all 7.27 0.00 2.32 1.93 0.05 88.44 11:51:01 0 6.64 0.00 1.77 0.40 0.05 91.13 11:51:01 1 8.76 0.00 2.25 0.62 0.07 88.31 11:51:01 2 3.73 0.00 2.13 0.40 0.05 93.69 11:51:01 3 6.34 0.00 1.79 3.10 0.05 88.73 11:51:01 4 5.03 0.00 2.10 1.17 0.05 91.65 11:51:01 5 9.64 0.00 3.20 5.50 0.07 81.59 11:51:01 6 8.78 0.00 2.80 3.26 0.07 85.09 11:51:01 7 9.21 0.00 2.43 0.97 0.07 87.32 11:52:01 all 4.84 0.00 0.75 0.03 0.04 94.34 11:52:01 0 3.65 0.00 0.69 0.05 0.03 95.59 11:52:01 1 4.75 0.00 0.69 0.02 0.03 94.52 11:52:01 2 5.11 0.00 0.60 0.00 0.03 94.25 11:52:01 3 6.29 0.00 0.67 0.00 0.02 93.03 11:52:01 4 5.63 0.00 0.70 0.13 0.03 93.50 11:52:01 5 4.19 0.00 0.75 0.00 0.07 94.99 11:52:01 6 3.06 0.00 1.31 0.07 0.03 95.53 11:52:01 7 6.06 0.00 0.63 0.00 0.03 93.27 11:53:01 all 1.20 0.00 0.23 0.02 0.03 98.51 11:53:01 0 0.93 0.00 0.20 0.05 0.03 98.78 11:53:01 1 1.05 0.00 0.18 0.03 0.05 98.68 11:53:01 2 0.79 0.00 0.20 0.00 0.05 98.96 11:53:01 3 1.52 0.00 0.27 0.02 0.03 98.16 11:53:01 4 1.67 0.00 0.22 0.00 0.02 98.10 11:53:01 5 1.42 0.00 0.28 0.03 0.03 98.23 11:53:01 6 0.72 0.00 0.23 0.02 0.02 99.02 11:53:01 7 1.54 0.00 0.28 0.02 0.03 98.13 11:54:01 all 2.53 0.00 0.66 0.14 0.03 96.64 11:54:01 0 0.95 0.00 0.53 0.02 0.03 98.46 11:54:01 1 6.98 0.00 0.75 0.08 0.03 92.16 11:54:01 2 4.50 0.00 0.87 0.58 0.03 94.02 11:54:01 3 1.37 0.00 0.65 0.15 0.03 97.79 11:54:01 4 2.04 0.00 0.72 0.08 0.03 97.12 11:54:01 5 1.54 0.00 0.45 0.08 0.03 97.89 11:54:01 6 1.62 0.00 0.50 0.08 0.02 97.78 11:54:01 7 1.22 0.00 0.78 0.07 0.02 97.91 Average: all 7.77 0.00 1.68 2.50 0.04 88.01 Average: 0 8.78 0.00 1.88 2.22 0.04 87.08 Average: 1 9.94 0.00 1.87 1.79 0.05 86.35 Average: 2 7.58 0.00 1.52 0.55 0.04 90.31 Average: 3 6.91 0.00 1.48 1.96 0.04 89.61 Average: 4 7.35 0.00 1.59 0.94 0.04 90.07 Average: 5 6.64 0.00 1.63 2.28 0.04 89.42 Average: 6 6.38 0.00 1.84 6.96 0.04 84.77 Average: 7 8.55 0.00 1.65 3.29 0.04 86.46
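The per-CPU table above closes the build log. If the same figures ever need to be re-derived after the fact, sar can replay them from the daily sysstat data file; the path below is the usual Ubuntu location (sa21 matching the 2025-06-21 run date) and is an assumption about this builder, not something stated in the log.

#!/bin/bash
# Sketch only: replay per-CPU utilisation for the day from the sysstat archive.
# /var/log/sysstat/saDD (DD = day of month) is the default location on Ubuntu.
sar -P ALL -f /var/log/sysstat/sa21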