Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-22339 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-CAtp37EtON28/agent.2152
SSH_AGENT_PID=2154
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp@tmp/private_key_2074457782284117933.key (/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp@tmp/private_key_2074457782284117933.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 8b99874d0fe646f509546f6b38b185b8f089ba50 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 8b99874d0fe646f509546f6b38b185b8f089ba50 # timeout=30
Commit message: "Add missing delete composition in CSIT"
 > git rev-list --no-walk ed38a50541249063daf2cfb00b312fb173adeace # timeout=10
provisioning config files...
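For reference, the checkout above can be reproduced outside Jenkins with plain git. A minimal sketch using only the mirror URL, refspec, and revision reported in the log (the local directory name is arbitrary):

git init policy-docker && cd policy-docker
# same fetch refspec Jenkins uses, quoted for the shell
git fetch --tags git://cloud.onap.org/mirror/policy/docker.git '+refs/heads/*:refs/remotes/origin/*'
# detached checkout of the exact revision under test (refs/remotes/origin/master at build time)
git checkout -f 8b99874d0fe646f509546f6b38b185b8f089ba50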
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins14701974237605147719.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-T0yX
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-T0yX/bin to PATH
Generating Requirements File
Python 3.10.6
pip 25.1.1 from /tmp/venv-T0yX/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.6.2
aspy.yaml==1.3.0
attrs==25.3.0
autopage==0.5.2
beautifulsoup4==4.13.4
boto3==1.38.39
botocore==1.38.39
bs4==0.0.2
cachetools==5.5.2
certifi==2025.6.15
cffi==1.17.1
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.4.2
click==8.2.1
cliff==4.10.0
cmd2==2.6.1
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.2.1
defusedxml==0.7.1
Deprecated==1.2.18
distlib==0.3.9
dnspython==2.7.0
docker==7.1.0
dogpile.cache==1.4.0
durationpy==0.10
email_validator==2.2.0
filelock==3.18.0
future==1.0.0
gitdb==4.0.12
GitPython==3.1.44
google-auth==2.40.3
httplib2==0.22.0
identify==2.6.12
idna==3.10
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.6
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.24.0
jsonschema-specifications==2025.4.1
keystoneauth1==5.11.1
kubernetes==33.1.0
lftools==0.37.13
lxml==5.4.0
MarkupSafe==3.0.2
msgpack==1.1.1
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.3.0
openstacksdk==4.6.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==4.0.2
oslo.config==9.8.0
oslo.context==6.0.0
oslo.i18n==6.5.1
oslo.log==7.1.0
oslo.serialization==5.7.0
oslo.utils==9.0.0
packaging==25.0
pbr==6.1.1
platformdirs==4.3.8
prettytable==3.16.0
psutil==7.0.0
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.6.1
PyJWT==2.10.1
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.7.0
python-dateutil==2.9.0.post0
python-heatclient==4.2.0
python-jenkins==1.8.2
python-keystoneclient==5.6.0
python-magnumclient==4.8.1
python-openstackclient==8.1.0
python-swiftclient==4.8.0
PyYAML==6.0.2
referencing==0.36.2
requests==2.32.4
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.25.1
rsa==4.9.1
ruamel.yaml==0.18.14
ruamel.yaml.clib==0.2.12
s3transfer==0.13.0
simplejson==3.20.1
six==1.17.0
smmap==5.0.2
soupsieve==2.7
stevedore==5.4.1
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.3
tqdm==4.67.1
typing_extensions==4.14.0
tzdata==2025.2
urllib3==1.26.20
virtualenv==20.31.2
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.17.2
xdg==6.0.0
xmltodict==0.14.2
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
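A rough sketch of what the lf-activate-venv() helper does, inferred from its INFO lines above (the real implementation lives in the Linux Foundation's global-jjb shell library, so the exact flags are assumptions):

python3 -m venv /tmp/venv-T0yX            # "Creating python3 venv at /tmp/venv-T0yX"
/tmp/venv-T0yX/bin/pip install lftools    # "Installing: lftools"
export PATH=/tmp/venv-T0yX/bin:$PATH      # "Adding /tmp/venv-T0yX/bin to PATH"
pip freeze                                # "Generating Requirements File": the package listing above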
[policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/sh /tmp/jenkins13639674174047128011.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/sh -xe /tmp/jenkins4870419512743719961.sh
+ /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/csit/run-project-csit.sh policy-opa-pdp
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 60.2M  100 60.2M    0     0  72.4M      0 --:--:-- --:--:-- --:--:-- 72.4M
Setting project configuration for: policy-opa-pdp
Configuring docker compose...
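The ~60 MB curl download above fetches the Compose V2 CLI plugin. A hedged sketch of the usual install pattern (the exact release URL and destination are not shown in the log; this assumes Docker's documented per-user CLI-plugin directory):

mkdir -p ~/.docker/cli-plugins
curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
     -o ~/.docker/cli-plugins/docker-compose
chmod +x ~/.docker/cli-plugins/docker-compose
docker compose version   # 'compose' should now resolve as a docker subcommand

The login warning earlier can likewise be avoided by piping the secret, e.g. echo "$DOCKER_PASS" | docker login --password-stdin (variable name hypothetical).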
Starting opa-pdp using postgres + Grafana/Prometheus
policy-db-migrator Pulling
kafka Pulling
prometheus Pulling
pap Pulling
grafana Pulling
opa-pdp Pulling
api Pulling
zookeeper Pulling
postgres Pulling
(per-layer download/extract progress omitted)
opa-pdp Pulled
policy-db-migrator Pulled
prometheus Pulled
Extracting [================================> ] 47.35MB/71.91MB 04f6155c873d Extracting [=======================================> ] 85.23MB/107.3MB c124ba1a8b26 Verifying Checksum c124ba1a8b26 Download complete 55f2b468da67 Extracting [=================================> ] 170.5MB/257.9MB eabd8714fec9 Extracting [==> ] 21.17MB/375MB 96e38c8865ba Extracting [===================================> ] 50.69MB/71.91MB 96e38c8865ba Extracting [===================================> ] 50.69MB/71.91MB 04f6155c873d Extracting [=========================================> ] 89.13MB/107.3MB 12c5c803443f Extracting [==================================================>] 116B/116B 12c5c803443f Extracting [==================================================>] 116B/116B 96e38c8865ba Extracting [=====================================> ] 53.48MB/71.91MB 96e38c8865ba Extracting [=====================================> ] 53.48MB/71.91MB 04f6155c873d Extracting [===========================================> ] 93.03MB/107.3MB 55f2b468da67 Extracting [=================================> ] 172.1MB/257.9MB eabd8714fec9 Extracting [===> ] 23.95MB/375MB 96e38c8865ba Extracting [=======================================> ] 56.82MB/71.91MB 96e38c8865ba Extracting [=======================================> ] 56.82MB/71.91MB 04f6155c873d Extracting [=============================================> ] 98.6MB/107.3MB eabd8714fec9 Extracting [====> ] 31.75MB/375MB eabd8714fec9 Extracting [====> ] 33.98MB/375MB 04f6155c873d Extracting [==============================================> ] 100.3MB/107.3MB 96e38c8865ba Extracting [========================================> ] 58.49MB/71.91MB 96e38c8865ba Extracting [========================================> ] 58.49MB/71.91MB 55f2b468da67 Extracting [=================================> ] 173.8MB/257.9MB eabd8714fec9 Extracting [=====> ] 41.22MB/375MB 04f6155c873d Extracting [================================================> ] 103.1MB/107.3MB eabd8714fec9 Extracting [=======> ] 52.92MB/375MB 04f6155c873d Extracting [================================================> ] 104.7MB/107.3MB 96e38c8865ba Extracting [==========================================> ] 60.72MB/71.91MB 96e38c8865ba Extracting [==========================================> ] 60.72MB/71.91MB 55f2b468da67 Extracting [=================================> ] 174.4MB/257.9MB eabd8714fec9 Extracting [========> ] 64.06MB/375MB 04f6155c873d Extracting [=================================================> ] 106.4MB/107.3MB 96e38c8865ba Extracting [============================================> ] 64.06MB/71.91MB 96e38c8865ba Extracting [============================================> ] 64.06MB/71.91MB 04f6155c873d Extracting [==================================================>] 107.3MB/107.3MB 55f2b468da67 Extracting [==================================> ] 176MB/257.9MB eabd8714fec9 Extracting [==========> ] 75.76MB/375MB 96e38c8865ba Extracting [===============================================> ] 67.96MB/71.91MB 96e38c8865ba Extracting [===============================================> ] 67.96MB/71.91MB eabd8714fec9 Extracting [===========> ] 86.9MB/375MB 55f2b468da67 Extracting [==================================> ] 179.4MB/257.9MB eabd8714fec9 Extracting [============> ] 96.93MB/375MB 96e38c8865ba Extracting [==================================================>] 71.91MB/71.91MB 96e38c8865ba Extracting [==================================================>] 71.91MB/71.91MB 55f2b468da67 Extracting [===================================> ] 181.6MB/257.9MB 
55f2b468da67 Extracting [====================================> ] 188.3MB/257.9MB 55f2b468da67 Extracting [=====================================> ] 193.3MB/257.9MB eabd8714fec9 Extracting [=============> ] 99.16MB/375MB 12c5c803443f Pull complete 55f2b468da67 Extracting [=====================================> ] 194.4MB/257.9MB eabd8714fec9 Extracting [=============> ] 104.7MB/375MB 55f2b468da67 Extracting [=====================================> ] 195.5MB/257.9MB eabd8714fec9 Extracting [==============> ] 107.5MB/375MB e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB eabd8714fec9 Extracting [==============> ] 108.6MB/375MB 55f2b468da67 Extracting [======================================> ] 196.6MB/257.9MB eabd8714fec9 Extracting [===============> ] 113.1MB/375MB 04f6155c873d Pull complete 55f2b468da67 Extracting [======================================> ] 199.4MB/257.9MB eabd8714fec9 Extracting [===============> ] 115.9MB/375MB 55f2b468da67 Extracting [======================================> ] 201.1MB/257.9MB eabd8714fec9 Extracting [================> ] 121.4MB/375MB eabd8714fec9 Extracting [================> ] 127MB/375MB 55f2b468da67 Extracting [=======================================> ] 203.9MB/257.9MB eabd8714fec9 Extracting [=================> ] 132MB/375MB 55f2b468da67 Extracting [========================================> ] 206.7MB/257.9MB eabd8714fec9 Extracting [==================> ] 137.6MB/375MB 55f2b468da67 Extracting [========================================> ] 208.3MB/257.9MB 55f2b468da67 Extracting [========================================> ] 210MB/257.9MB eabd8714fec9 Extracting [==================> ] 140.9MB/375MB 96e38c8865ba Pull complete 96e38c8865ba Pull complete eabd8714fec9 Extracting [===================> ] 145.9MB/375MB 55f2b468da67 Extracting [========================================> ] 211.1MB/257.9MB eabd8714fec9 Extracting [====================> ] 150.4MB/375MB 55f2b468da67 Extracting [=========================================> ] 213.4MB/257.9MB e27c75a98748 Pull complete 5e06c6bed798 Extracting [==================================================>] 296B/296B e5d7009d9e55 Extracting [==================================================>] 295B/295B e5d7009d9e55 Extracting [==================================================>] 295B/295B 5e06c6bed798 Extracting [==================================================>] 296B/296B eabd8714fec9 Extracting [====================> ] 152.1MB/375MB 55f2b468da67 Extracting [=========================================> ] 213.9MB/257.9MB eabd8714fec9 Extracting [====================> ] 154.3MB/375MB e73cb4a42719 Extracting [> ] 557.1kB/109.1MB 85dde7dceb0a Extracting [> ] 557.1kB/63.48MB e5d7009d9e55 Pull complete 5e06c6bed798 Pull complete 1ec5fb03eaee Extracting [============> ] 32.77kB/127kB 1ec5fb03eaee Extracting [==================================================>] 127kB/127kB 55f2b468da67 Extracting [==========================================> ] 217.3MB/257.9MB eabd8714fec9 Extracting [====================> ] 156.5MB/375MB e73cb4a42719 Extracting [=> ] 3.342MB/109.1MB 85dde7dceb0a Extracting [> ] 1.114MB/63.48MB 55f2b468da67 Extracting [==========================================> ] 220MB/257.9MB 684be6598fc9 Extracting [============> ] 32.77kB/127.5kB 684be6598fc9 Extracting [==================================================>] 127.5kB/127.5kB 684be6598fc9 Extracting 
[==================================================>] 127.5kB/127.5kB e73cb4a42719 Extracting [===> ] 6.685MB/109.1MB eabd8714fec9 Extracting [=====================> ] 160.4MB/375MB eabd8714fec9 Extracting [======================> ] 167.7MB/375MB e73cb4a42719 Extracting [===> ] 7.242MB/109.1MB eabd8714fec9 Extracting [========================> ] 181.6MB/375MB 55f2b468da67 Extracting [==========================================> ] 221.2MB/257.9MB eabd8714fec9 Extracting [=========================> ] 192.7MB/375MB 55f2b468da67 Extracting [==========================================> ] 221.7MB/257.9MB e73cb4a42719 Extracting [===> ] 8.356MB/109.1MB eabd8714fec9 Extracting [===========================> ] 202.8MB/375MB 85dde7dceb0a Extracting [=> ] 1.671MB/63.48MB 55f2b468da67 Extracting [===========================================> ] 223.9MB/257.9MB 1ec5fb03eaee Pull complete e73cb4a42719 Extracting [=====> ] 11.14MB/109.1MB 684be6598fc9 Pull complete eabd8714fec9 Extracting [============================> ] 210MB/375MB d3165a332ae3 Extracting [==================================================>] 1.328kB/1.328kB d3165a332ae3 Extracting [==================================================>] 1.328kB/1.328kB e73cb4a42719 Extracting [=====> ] 12.81MB/109.1MB 55f2b468da67 Extracting [===========================================> ] 225.6MB/257.9MB eabd8714fec9 Extracting [============================> ] 216.7MB/375MB 85dde7dceb0a Extracting [=> ] 2.228MB/63.48MB e73cb4a42719 Extracting [======> ] 15.04MB/109.1MB 55f2b468da67 Extracting [===========================================> ] 226.7MB/257.9MB eabd8714fec9 Extracting [============================> ] 217.3MB/375MB 0d92cad902ba Extracting [==================================================>] 1.148kB/1.148kB 0d92cad902ba Extracting [==================================================>] 1.148kB/1.148kB 55f2b468da67 Extracting [============================================> ] 227.3MB/257.9MB e73cb4a42719 Extracting [=======> ] 16.71MB/109.1MB eabd8714fec9 Extracting [=============================> ] 218.9MB/375MB e73cb4a42719 Extracting [=======> ] 17.27MB/109.1MB 55f2b468da67 Extracting [============================================> ] 227.8MB/257.9MB 85dde7dceb0a Extracting [==> ] 2.785MB/63.48MB e73cb4a42719 Extracting [========> ] 19.5MB/109.1MB eabd8714fec9 Extracting [=============================> ] 220.6MB/375MB 55f2b468da67 Extracting [============================================> ] 228.4MB/257.9MB 85dde7dceb0a Extracting [===> ] 3.899MB/63.48MB e73cb4a42719 Extracting [==========> ] 22.84MB/109.1MB eabd8714fec9 Extracting [=============================> ] 223.9MB/375MB 55f2b468da67 Extracting [============================================> ] 230.1MB/257.9MB d3165a332ae3 Pull complete eabd8714fec9 Extracting [==============================> ] 227.3MB/375MB 55f2b468da67 Extracting [============================================> ] 230.6MB/257.9MB e73cb4a42719 Extracting [============> ] 26.18MB/109.1MB 85dde7dceb0a Extracting [===> ] 4.456MB/63.48MB eabd8714fec9 Extracting [==============================> ] 230.1MB/375MB 55f2b468da67 Extracting [============================================> ] 231.2MB/257.9MB e73cb4a42719 Extracting [=============> ] 28.41MB/109.1MB 85dde7dceb0a Extracting [===> ] 5.014MB/63.48MB eabd8714fec9 Extracting [===============================> ] 233.4MB/375MB c124ba1a8b26 Extracting [> ] 557.1kB/91.87MB 55f2b468da67 Extracting [=============================================> ] 232.3MB/257.9MB e73cb4a42719 Extracting 
[===============> ] 32.87MB/109.1MB 0d92cad902ba Pull complete eabd8714fec9 Extracting [===============================> ] 236.2MB/375MB c124ba1a8b26 Extracting [====> ] 7.799MB/91.87MB c124ba1a8b26 Extracting [=====> ] 9.47MB/91.87MB 55f2b468da67 Extracting [=============================================> ] 234MB/257.9MB c124ba1a8b26 Extracting [=====> ] 10.03MB/91.87MB e73cb4a42719 Extracting [================> ] 35.09MB/109.1MB eabd8714fec9 Extracting [===============================> ] 237.3MB/375MB 85dde7dceb0a Extracting [======> ] 7.799MB/63.48MB eabd8714fec9 Extracting [================================> ] 241.2MB/375MB c124ba1a8b26 Extracting [=========> ] 17.27MB/91.87MB e73cb4a42719 Extracting [=================> ] 37.88MB/109.1MB 55f2b468da67 Extracting [=============================================> ] 236.2MB/257.9MB dcc0c3b2850c Extracting [> ] 557.1kB/76.12MB 85dde7dceb0a Extracting [=======> ] 9.47MB/63.48MB eabd8714fec9 Extracting [================================> ] 243.4MB/375MB c124ba1a8b26 Extracting [=============> ] 23.95MB/91.87MB e73cb4a42719 Extracting [==================> ] 41.22MB/109.1MB 55f2b468da67 Extracting [==============================================> ] 238.4MB/257.9MB dcc0c3b2850c Extracting [=====> ] 8.913MB/76.12MB c124ba1a8b26 Extracting [================> ] 31.2MB/91.87MB 85dde7dceb0a Extracting [=========> ] 11.7MB/63.48MB eabd8714fec9 Extracting [================================> ] 245.7MB/375MB e73cb4a42719 Extracting [===================> ] 42.89MB/109.1MB dcc0c3b2850c Extracting [==========> ] 15.6MB/76.12MB 55f2b468da67 Extracting [==============================================> ] 241.2MB/257.9MB c124ba1a8b26 Extracting [======================> ] 41.22MB/91.87MB 85dde7dceb0a Extracting [==========> ] 12.81MB/63.48MB e73cb4a42719 Extracting [====================> ] 45.68MB/109.1MB eabd8714fec9 Extracting [=================================> ] 248.4MB/375MB dcc0c3b2850c Extracting [===============> ] 23.4MB/76.12MB c124ba1a8b26 Extracting [==========================> ] 48.46MB/91.87MB 85dde7dceb0a Extracting [===========> ] 15.04MB/63.48MB eabd8714fec9 Extracting [=================================> ] 251.2MB/375MB dcc0c3b2850c Extracting [====================> ] 30.64MB/76.12MB e73cb4a42719 Extracting [======================> ] 49.58MB/109.1MB c124ba1a8b26 Extracting [===============================> ] 57.38MB/91.87MB 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB eabd8714fec9 Extracting [=================================> ] 253.5MB/375MB dcc0c3b2850c Extracting [========================> ] 37.32MB/76.12MB e73cb4a42719 Extracting [=======================> ] 51.25MB/109.1MB 85dde7dceb0a Extracting [=============> ] 16.71MB/63.48MB c124ba1a8b26 Extracting [====================================> ] 66.85MB/91.87MB 55f2b468da67 Extracting [================================================> ] 251.2MB/257.9MB eabd8714fec9 Extracting [==================================> ] 256.8MB/375MB dcc0c3b2850c Extracting [============================> ] 44.01MB/76.12MB e73cb4a42719 Extracting [========================> ] 52.36MB/109.1MB c124ba1a8b26 Extracting [========================================> ] 74.65MB/91.87MB 85dde7dceb0a Extracting [==============> ] 18.38MB/63.48MB dcc0c3b2850c Extracting [================================> ] 50.14MB/76.12MB eabd8714fec9 Extracting [==================================> ] 259.6MB/375MB 55f2b468da67 Extracting [=================================================> ] 254MB/257.9MB 
e73cb4a42719 Extracting [========================> ] 54.03MB/109.1MB c124ba1a8b26 Extracting [============================================> ] 81.33MB/91.87MB 85dde7dceb0a Extracting [================> ] 20.61MB/63.48MB dcc0c3b2850c Extracting [======================================> ] 57.93MB/76.12MB eabd8714fec9 Extracting [==================================> ] 262.4MB/375MB 55f2b468da67 Extracting [=================================================> ] 257.4MB/257.9MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB e73cb4a42719 Extracting [=========================> ] 56.26MB/109.1MB c124ba1a8b26 Extracting [===============================================> ] 88.01MB/91.87MB 85dde7dceb0a Extracting [=================> ] 22.84MB/63.48MB dcc0c3b2850c Extracting [==========================================> ] 65.18MB/76.12MB eabd8714fec9 Extracting [===================================> ] 264.6MB/375MB c124ba1a8b26 Extracting [==================================================>] 91.87MB/91.87MB e73cb4a42719 Extracting [===========================> ] 59.6MB/109.1MB dcc0c3b2850c Extracting [==============================================> ] 70.19MB/76.12MB 85dde7dceb0a Extracting [==================> ] 23.95MB/63.48MB eabd8714fec9 Extracting [===================================> ] 266.3MB/375MB e73cb4a42719 Extracting [===========================> ] 60.72MB/109.1MB dcc0c3b2850c Extracting [==================================================>] 76.12MB/76.12MB 85dde7dceb0a Extracting [=====================> ] 27.3MB/63.48MB e73cb4a42719 Extracting [=============================> ] 65.18MB/109.1MB eabd8714fec9 Extracting [===================================> ] 269.1MB/375MB e73cb4a42719 Extracting [==============================> ] 67.4MB/109.1MB 85dde7dceb0a Extracting [======================> ] 28.41MB/63.48MB c124ba1a8b26 Pull complete e73cb4a42719 Extracting [================================> ] 70.75MB/109.1MB e73cb4a42719 Extracting [================================> ] 71.3MB/109.1MB 85dde7dceb0a Extracting [========================> ] 30.64MB/63.48MB eabd8714fec9 Extracting [====================================> ] 270.7MB/375MB 85dde7dceb0a Extracting [=========================> ] 31.75MB/63.48MB eabd8714fec9 Extracting [====================================> ] 271.8MB/375MB e73cb4a42719 Extracting [==================================> ] 75.2MB/109.1MB eabd8714fec9 Extracting [====================================> ] 272.4MB/375MB e73cb4a42719 Extracting [====================================> ] 79.1MB/109.1MB 85dde7dceb0a Extracting [===========================> ] 34.54MB/63.48MB e73cb4a42719 Extracting [======================================> ] 83.56MB/109.1MB 85dde7dceb0a Extracting [=============================> ] 37.88MB/63.48MB eabd8714fec9 Extracting [====================================> ] 273.5MB/375MB 85dde7dceb0a Extracting [================================> ] 40.67MB/63.48MB e73cb4a42719 Extracting [========================================> ] 88.57MB/109.1MB eabd8714fec9 Extracting [====================================> ] 275.2MB/375MB 85dde7dceb0a Extracting [==================================> ] 43.45MB/63.48MB e73cb4a42719 Extracting [==========================================> ] 91.91MB/109.1MB eabd8714fec9 Extracting [=====================================> ] 278MB/375MB dcc0c3b2850c Pull complete 6394804c2196 Extracting 
[==================================================>] 1.299kB/1.299kB 6394804c2196 Extracting [==================================================>] 1.299kB/1.299kB 85dde7dceb0a Extracting [===================================> ] 45.68MB/63.48MB eabd8714fec9 Extracting [=====================================> ] 279.6MB/375MB e73cb4a42719 Extracting [==========================================> ] 93.59MB/109.1MB 85dde7dceb0a Extracting [======================================> ] 48.46MB/63.48MB eabd8714fec9 Extracting [=====================================> ] 283MB/375MB e73cb4a42719 Extracting [============================================> ] 96.93MB/109.1MB 85dde7dceb0a Extracting [========================================> ] 51.25MB/63.48MB eabd8714fec9 Extracting [======================================> ] 287.4MB/375MB e73cb4a42719 Extracting [=============================================> ] 99.71MB/109.1MB e73cb4a42719 Extracting [==============================================> ] 100.8MB/109.1MB 85dde7dceb0a Extracting [==========================================> ] 53.48MB/63.48MB eabd8714fec9 Extracting [======================================> ] 290.8MB/375MB eabd8714fec9 Extracting [=======================================> ] 293.6MB/375MB eb7cda286a15 Extracting [==================================================>] 1.119kB/1.119kB eb7cda286a15 Extracting [==================================================>] 1.119kB/1.119kB 55f2b468da67 Pull complete e73cb4a42719 Extracting [===============================================> ] 103.1MB/109.1MB e73cb4a42719 Extracting [===============================================> ] 103.6MB/109.1MB eabd8714fec9 Extracting [=======================================> ] 295.8MB/375MB 85dde7dceb0a Extracting [==============================================> ] 59.05MB/63.48MB 6394804c2196 Pull complete 82bfc142787e Extracting [> ] 98.3kB/8.613MB e73cb4a42719 Extracting [================================================> ] 104.7MB/109.1MB 82bfc142787e Extracting [====> ] 786.4kB/8.613MB eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB 82bfc142787e Extracting [======================> ] 3.834MB/8.613MB 82bfc142787e Extracting [======================> ] 3.932MB/8.613MB 85dde7dceb0a Extracting [==============================================> ] 59.6MB/63.48MB 85dde7dceb0a Extracting [===============================================> ] 60.16MB/63.48MB 82bfc142787e Extracting [================================> ] 5.603MB/8.613MB eabd8714fec9 Extracting [=======================================> ] 296.9MB/375MB e73cb4a42719 Extracting [================================================> ] 105.8MB/109.1MB 82bfc142787e Extracting [=========================================> ] 7.078MB/8.613MB 85dde7dceb0a Extracting [================================================> ] 61.83MB/63.48MB 82bfc142787e Extracting [================================================> ] 8.356MB/8.613MB eabd8714fec9 Extracting [=======================================> ] 298.6MB/375MB 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB e73cb4a42719 Extracting [=================================================> ] 107MB/109.1MB 85dde7dceb0a Extracting [=================================================> ] 62.95MB/63.48MB 85dde7dceb0a Extracting [==================================================>] 63.48MB/63.48MB 85dde7dceb0a Extracting [==================================================>] 63.48MB/63.48MB eb7cda286a15 Pull complete eabd8714fec9 
Extracting [========================================> ] 300.3MB/375MB e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB eabd8714fec9 Extracting [========================================> ] 302.5MB/375MB e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB eabd8714fec9 Extracting [========================================> ] 303MB/375MB eabd8714fec9 Extracting [========================================> ] 305.3MB/375MB eabd8714fec9 Extracting [========================================> ] 306.9MB/375MB eabd8714fec9 Extracting [=========================================> ] 309.7MB/375MB 82bfc142787e Pull complete 85dde7dceb0a Pull complete eabd8714fec9 Extracting [=========================================> ] 310.3MB/375MB eabd8714fec9 Extracting [=========================================> ] 312MB/375MB eabd8714fec9 Extracting [=========================================> ] 314.2MB/375MB eabd8714fec9 Extracting [==========================================> ] 317MB/375MB pap Pulled eabd8714fec9 Extracting [==========================================> ] 320.9MB/375MB eabd8714fec9 Extracting [===========================================> ] 323.6MB/375MB eabd8714fec9 Extracting [===========================================> ] 327MB/375MB eabd8714fec9 Extracting [===========================================> ] 329.2MB/375MB eabd8714fec9 Extracting [============================================> ] 331.4MB/375MB eabd8714fec9 Extracting [============================================> ] 333.1MB/375MB eabd8714fec9 Extracting [=============================================> ] 338.1MB/375MB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB e73cb4a42719 Pull complete api Pulled 7009d5001b77 Extracting [==================================================>] 11.92kB/11.92kB 7009d5001b77 Extracting [==================================================>] 11.92kB/11.92kB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB 46baca71a4ef Pull complete a83b68436f09 Pull complete 7009d5001b77 Pull complete 787d6bee9571 Extracting [==================================================>] 127B/127B 787d6bee9571 Extracting [==================================================>] 127B/127B 538deb30e80c Extracting [==================================================>] 1.225kB/1.225kB 538deb30e80c Extracting [==================================================>] 1.225kB/1.225kB eabd8714fec9 Extracting [=============================================> ] 342MB/375MB b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB 787d6bee9571 Pull complete 13ff0988aaea Extracting [==================================================>] 167B/167B 13ff0988aaea Extracting [==================================================>] 167B/167B b0e0ef7895f4 Extracting [===========> ] 8.258MB/37.01MB 538deb30e80c Pull complete grafana Pulled 13ff0988aaea Pull complete b0e0ef7895f4 Extracting [====================> ] 14.94MB/37.01MB 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB eabd8714fec9 Extracting 
[=============================================> ] 342.6MB/375MB b0e0ef7895f4 Extracting [====================================> ] 27.13MB/37.01MB 4b82842ab819 Pull complete 7e568a0dc8fb Extracting [==================================================>] 184B/184B 7e568a0dc8fb Extracting [==================================================>] 184B/184B eabd8714fec9 Extracting [=============================================> ] 343.1MB/375MB b0e0ef7895f4 Extracting [===============================================> ] 35.39MB/37.01MB b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB 7e568a0dc8fb Pull complete b0e0ef7895f4 Pull complete c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB postgres Pulled eabd8714fec9 Extracting [==============================================> ] 345.4MB/375MB eabd8714fec9 Extracting [==============================================> ] 347MB/375MB c0c90eeb8aca Pull complete 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 5cfb27c10ea5 Extracting [==================================================>] 852B/852B eabd8714fec9 Extracting [===============================================> ] 353.2MB/375MB 5cfb27c10ea5 Pull complete 40a5eed61bb0 Extracting [==================================================>] 98B/98B 40a5eed61bb0 Extracting [==================================================>] 98B/98B 40a5eed61bb0 Pull complete e040ea11fa10 Extracting [==================================================>] 173B/173B e040ea11fa10 Extracting [==================================================>] 173B/173B eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB e040ea11fa10 Pull complete eabd8714fec9 Extracting [================================================> ] 362.6MB/375MB 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB eabd8714fec9 Extracting [=================================================> ] 368.2MB/375MB 09d5a3f70313 Extracting [=======> ] 16.15MB/109.2MB eabd8714fec9 Extracting [=================================================> ] 372.1MB/375MB 09d5a3f70313 Extracting [=============> ] 30.08MB/109.2MB eabd8714fec9 Extracting [==================================================>] 375MB/375MB 09d5a3f70313 Extracting [====================> ] 45.12MB/109.2MB 09d5a3f70313 Extracting [============================> ] 61.28MB/109.2MB eabd8714fec9 Pull complete 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 09d5a3f70313 Extracting [===================================> ] 77.43MB/109.2MB 45fd2fec8a19 Pull complete 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 09d5a3f70313 Extracting [=========================================> ] 91.36MB/109.2MB 8f10199ed94b Extracting [====> ] 786.4kB/8.768MB 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 09d5a3f70313 Extracting [===============================================> ] 104.7MB/109.2MB 8f10199ed94b Pull complete f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09d5a3f70313 Extracting 
[==================================================>] 109.2MB/109.2MB 09d5a3f70313 Pull complete 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB f963a77d2726 Pull complete f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB 356f5c2c843b Pull complete kafka Pulled f3a82e9f1761 Extracting [=================> ] 15.14MB/44.41MB f3a82e9f1761 Extracting [==================================> ] 30.74MB/44.41MB f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB f3a82e9f1761 Pull complete 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Pull complete 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 9c266ba63f51 Pull complete 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Pull complete 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Pull complete 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Pull complete 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 71a9f6a9ab4d Pull complete da3ed5db7103 Extracting [> ] 557.1kB/127.4MB da3ed5db7103 Extracting [====> ] 12.26MB/127.4MB da3ed5db7103 Extracting [==========> ] 27.3MB/127.4MB da3ed5db7103 Extracting [=================> ] 43.45MB/127.4MB da3ed5db7103 Extracting [========================> ] 61.83MB/127.4MB da3ed5db7103 Extracting [==============================> ] 78.54MB/127.4MB da3ed5db7103 Extracting [=====================================> ] 95.81MB/127.4MB da3ed5db7103 Extracting [============================================> ] 112.5MB/127.4MB da3ed5db7103 Extracting [===============================================> ] 120.9MB/127.4MB da3ed5db7103 Extracting [=================================================> ] 125.9MB/127.4MB da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB da3ed5db7103 Pull complete c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Pull complete zookeeper Pulled Network compose_default Creating Network compose_default Created Container prometheus Creating Container zookeeper Creating Container postgres Creating Container prometheus Created Container postgres Created Container policy-db-migrator Creating Container grafana Creating Container zookeeper Created Container kafka Creating Container grafana Created Container kafka Created Container policy-db-migrator Created Container policy-api Creating Container policy-api Created Container policy-pap Creating Container policy-pap Created Container policy-opa-pdp Creating Container policy-opa-pdp Created Container postgres 
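For reference, the pull and create/start sequence above (the containers just created are started below) can be reproduced by hand with Docker Compose. This is a minimal sketch, assuming a compose project directory named "compose" (inferred from the compose_default network name) with services matching the container names in this log; it is not the job's actual script:

# minimal sketch, not the CSIT job's own compose invocation
$ cd compose
$ docker compose pull
$ docker compose up -d prometheus grafana zookeeper kafka postgres \
    policy-db-migrator policy-api policy-pap policy-opa-pdp
$ docker compose ps    # each service should report Created/Up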
Container postgres Starting
Container zookeeper Starting
Container prometheus Starting
Container zookeeper Started
Container kafka Starting
Container kafka Started
Container prometheus Started
Container grafana Starting
Container grafana Started
Container postgres Started
Container policy-db-migrator Starting
Container policy-db-migrator Started
Container policy-api Starting
Container policy-api Started
Container policy-pap Starting
Container policy-pap Started
Container policy-opa-pdp Starting
Container policy-opa-pdp Started
Prometheus server: http://localhost:30259
Grafana server: http://localhost:30269
Waiting 3 minutes for OPA-PDP to start...
Checking if REST port 30003 is open on localhost ...
IMAGE                                                      NAMES            STATUS
nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 3 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 3 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 3 minutes
nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 3 minutes
nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 3 minutes
nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 3 minutes
Checking if REST port 30012 is open on localhost ...
[container status table repeated, unchanged: all eight containers Up 3 minutes]
Cloning into '/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/csit/resources/tests/models'...
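The "Checking if REST port ... is open" steps gate the test run on OPA-PDP's REST endpoints (30003 and 30012). A minimal probe of the same kind, assuming bash and netcat are available; the retry count and sleep interval are illustrative, not the CSIT helper's actual values:

#!/bin/bash
# illustrative port probe, not the actual CSIT helper script:
# poll each localhost port until it accepts a TCP connection
for port in 30003 30012; do
  for attempt in $(seq 1 36); do
    if nc -z localhost "$port"; then
      echo "port $port is open"
      break
    fi
    echo "waiting for port $port (attempt $attempt)"
    sleep 5
  done
done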
Building robot framework docker image
sha256:b9baf3f722ba6586427ed81ae4654fd5525fea32bb8b6ae3c41f97ea478268f6
top - 11:50:39 up 6 min, 0 users, load average: 1.03, 1.23, 0.64
Tasks: 220 total, 1 running, 148 sleeping, 0 stopped, 0 zombie
%Cpu(s): 10.4 us, 2.6 sy, 0.0 ni, 83.9 id, 2.9 wa, 0.0 hi, 0.1 si, 0.1 st
       total   used   free   shared   buff/cache   available
Mem:     31G   2.3G    21G     28M         7.3G         28G
Swap:   1.0G     0B   1.0G
[container status table repeated, unchanged: all eight containers Up 3 minutes]
CONTAINER ID   NAME             CPU %    MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
c872b5c0d2a8   policy-opa-pdp   0.21%    12.94MiB / 31.41GiB   0.04%   80.7kB / 78kB     0B / 0B         21
795697fb685d   policy-pap       15.16%   478.8MiB / 31.41GiB   1.49%   2.21MB / 1.26MB   0B / 139MB      69
3dadda4f30d0   policy-api       14.34%   419.6MiB / 31.41GiB   1.30%   1.15MB / 1.09MB   0B / 0B         60
9825baa66400   grafana          0.19%    109.1MiB / 31.41GiB   0.34%   19MB / 204kB      0B / 30.7MB     20
750da8f504f7   kafka            1.83%    398.4MiB / 31.41GiB   1.24%   304kB / 290kB     0B / 700kB      83
c01e0fdec983   zookeeper        0.09%    85.98MiB / 31.41GiB   0.27%   58.6kB / 49.8kB   168kB / 434kB   62
cd9ab93d4517   postgres         0.02%    86.39MiB / 31.41GiB   0.27%   2.55MB / 3.73MB   0B / 158MB      26
4f13e72e2f88   prometheus       0.23%    20.4MiB / 31.41GiB    0.06%   276kB / 12kB      0B / 0B         12
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
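The policy-csit container wraps a plain Robot Framework run over the two suites, passing the ROBOT_VARIABLES listed above. Expanded by hand it would look roughly like the sketch below; the output directory matches the result paths printed further down, but the exact flags are an assumption, not the container's actual entrypoint:

# illustrative expansion of the wrapped Robot Framework call (flags/paths assumed)
$ robot --outputdir /tmp/results \
    -v POLICY_OPA_IP:policy-opa-pdp:8282 \
    -v PROMETHEUS_IP:prometheus:9090 \
    -v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies \
    opa-pdp-test.robot opa-pdp-slas.robot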
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Healthcheck :: Verify OPA PDP health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateDataBeforePolicyDeployment | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesZonePolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesVehiclePolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesAbacPolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test | PASS |
policy-csit | 5 tests, 5 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas | PASS |
policy-csit | 5 tests, 5 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas | PASS |
policy-csit | 10 tests, 10 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
IMAGE                                                      NAMES            STATUS
nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 6 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 6 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 6 minutes
nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 6 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 6 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 6 minutes
nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 6 minutes
nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 6 minutes
Shut down started!
Collecting logs from docker compose containers...
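The Opa-Pdp-Slas suite above validates OPA-PDP's decision/data counters and average response times through Prometheus, which this deployment exposes at http://localhost:30259. A spot check of the same kind can be made against the Prometheus HTTP API; the metric name below is an illustrative assumption, not necessarily the exact series the suite queries:

# illustrative Prometheus instant query; 'opa_pdp_policy_decisions_total' is an assumed metric name
$ curl -s 'http://localhost:30259/api/v1/query' \
    --data-urlencode 'query=opa_pdp_policy_decisions_total' | jq '.data.result'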
grafana | logger=settings t=2025-06-19T11:46:53.234752448Z level=info msg="Starting Grafana" version=12.0.2 commit=5bda17e7c1cb313eb96266f2fdda73a6b35c3977 branch=HEAD compiled=2025-06-19T11:46:53Z
grafana | logger=settings t=2025-06-19T11:46:53.235112907Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2025-06-19T11:46:53.235128157Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2025-06-19T11:46:53.235132227Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2025-06-19T11:46:53.235136098Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2025-06-19T11:46:53.235139718Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-19T11:46:53.235143268Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-19T11:46:53.235147268Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2025-06-19T11:46:53.235151408Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2025-06-19T11:46:53.235154538Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2025-06-19T11:46:53.235158788Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-19T11:46:53.235162698Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-19T11:46:53.235166278Z level=info msg=Target target=[all]
grafana | logger=settings t=2025-06-19T11:46:53.235174479Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2025-06-19T11:46:53.235177849Z level=info msg="Path Data" path=/var/lib/grafana
grafana | logger=settings t=2025-06-19T11:46:53.235185139Z level=info msg="Path Logs" path=/var/log/grafana
grafana | logger=settings t=2025-06-19T11:46:53.235189149Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
grafana | logger=settings t=2025-06-19T11:46:53.235191949Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
grafana | logger=settings t=2025-06-19T11:46:53.235197089Z level=info msg="App mode production"
grafana | logger=featuremgmt t=2025-06-19T11:46:53.235704411Z level=info msg=FeatureToggles logRowsPopoverMenu=true pinNavItems=true tlsMemcached=true logsContextDatasourceUi=true alertRuleRestore=true reportingUseRawTimeRange=true grafanaconThemes=true prometheusUsesCombobox=true lokiLabelNamesQueryApi=true prometheusAzureOverrideAudience=true alertingInsights=true azureMonitorPrometheusExemplars=true alertingNotificationsStepMode=true nestedFolders=true kubernetesPlaylists=true dashboardScene=true cloudWatchCrossAccountQuerying=true addFieldFromCalculationStatFunctions=true logsInfiniteScrolling=true preinstallAutoUpdate=true correlations=true alertingQueryAndExpressionsStepMode=true cloudWatchRoundUpEndTime=true alertingUIOptimizeReducer=true alertingRuleVersionHistoryRestore=true unifiedStorageSearchPermissionFiltering=true alertingSimplifiedRouting=true groupToNestedTableTransformation=true panelMonitoring=true dataplaneFrontendFallback=true promQLScope=true influxdbBackendMigration=true lokiQuerySplitting=true ssoSettingsSAML=true externalCorePlugins=true kubernetesClientDashboardsFolders=true dashboardSceneSolo=true angularDeprecationUI=true onPremToCloudMigrations=true newDashboardSharingComponent=true azureMonitorEnableUserAuth=true unifiedRequestLog=true alertingRuleRecoverDeleted=true alertingApiServer=true lokiQueryHints=true dashgpt=true logsPanelControls=true failWrongDSUID=true annotationPermissionUpdate=true dashboardSceneForViewers=true useSessionStorageForRedirection=true ssoSettingsApi=true recordedQueriesMulti=true recoveryThreshold=true newFiltersUI=true logsExploreTableVisualisation=true lokiStructuredMetadata=true pluginsDetailsRightPanel=true transformationsRedesign=true cloudWatchNewLabelParsing=true formatString=true awsAsyncQueryCaching=true alertingRulePermanentlyDelete=true publicDashboardsScene=true newPDFRendering=true
grafana | logger=sqlstore t=2025-06-19T11:46:53.235764883Z level=info msg="Connecting to DB" dbtype=sqlite3
grafana | logger=sqlstore t=2025-06-19T11:46:53.235783093Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
grafana | logger=migrator t=2025-06-19T11:46:53.238175862Z level=info msg="Locking database"
grafana | logger=migrator t=2025-06-19T11:46:53.238201503Z level=info msg="Starting DB migrations"
grafana | logger=migrator t=2025-06-19T11:46:53.238890329Z level=info msg="Executing migration" id="create migration_log table"
grafana | logger=migrator t=2025-06-19T11:46:53.239924895Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.033986ms
grafana | logger=migrator t=2025-06-19T11:46:53.253191211Z level=info msg="Executing migration" id="create user table"
grafana | logger=migrator t=2025-06-19T11:46:53.254612426Z level=info msg="Migration successfully executed" id="create user table" duration=1.422376ms
grafana | logger=migrator t=2025-06-19T11:46:53.26210353Z level=info msg="Executing migration" id="add unique index user.login"
grafana | logger=migrator t=2025-06-19T11:46:53.263272938Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.169228ms
grafana | logger=migrator t=2025-06-19T11:46:53.270343013Z level=info msg="Executing migration" id="add unique index user.email"
grafana | logger=migrator t=2025-06-19T11:46:53.271501311Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.161279ms
grafana | logger=migrator t=2025-06-19T11:46:53.278981665Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
grafana | logger=migrator t=2025-06-19T11:46:53.279672191Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=690.166µs
grafana | logger=migrator t=2025-06-19T11:46:53.285934125Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
grafana | logger=migrator t=2025-06-19T11:46:53.287299059Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.364304ms
grafana | logger=migrator t=2025-06-19T11:46:53.295032158Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
grafana | logger=migrator t=2025-06-19T11:46:53.297488548Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.45621ms
grafana | logger=migrator t=2025-06-19T11:46:53.316455624Z level=info msg="Executing migration" id="create user table v2"
grafana | logger=migrator t=2025-06-19T11:46:53.317219133Z level=info msg="Migration successfully executed" id="create user table v2" duration=764.109µs
grafana | logger=migrator t=2025-06-19T11:46:53.322292848Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
grafana | logger=migrator t=2025-06-19T11:46:53.322834371Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=541.843µs
grafana | logger=migrator t=2025-06-19T11:46:53.328454299Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
grafana | logger=migrator t=2025-06-19T11:46:53.329579397Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.123898ms
grafana | logger=migrator t=2025-06-19T11:46:53.335042031Z level=info msg="Executing migration" id="copy data_source v1 to v2"
grafana | logger=migrator t=2025-06-19T11:46:53.335592835Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=550.693µs
grafana | logger=migrator t=2025-06-19T11:46:53.341326775Z level=info msg="Executing migration" id="Drop old table user_v1"
grafana | logger=migrator t=2025-06-19T11:46:53.342175436Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=848.481µs
grafana | logger=migrator t=2025-06-19T11:46:53.346779259Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
grafana | logger=migrator t=2025-06-19T11:46:53.348487611Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.712141ms
grafana | logger=migrator t=2025-06-19T11:46:53.352828678Z level=info msg="Executing migration" id="Update user table charset"
grafana | logger=migrator t=2025-06-19T11:46:53.352860398Z level=info msg="Migration successfully executed" id="Update user table charset" duration=31.13µs
grafana | logger=migrator t=2025-06-19T11:46:53.358868576Z level=info msg="Executing migration" id="Add last_seen_at column to user"
grafana | logger=migrator t=2025-06-19T11:46:53.360414344Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.546888ms
grafana | logger=migrator t=2025-06-19T11:46:53.365719765Z level=info msg="Executing migration" id="Add missing user data"
grafana | logger=migrator t=2025-06-19T11:46:53.366073773Z level=info msg="Migration successfully executed" id="Add missing user data" duration=353.038µs
grafana | logger=migrator t=2025-06-19T11:46:53.370374628Z level=info msg="Executing migration" id="Add is_disabled column to user"
grafana | logger=migrator t=2025-06-19T11:46:53.371476006Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.097608ms
grafana | logger=migrator t=2025-06-19T11:46:53.376270123Z level=info msg="Executing migration" id="Add index user.login/user.email"
grafana | logger=migrator t=2025-06-19T11:46:53.377577956Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.307042ms
grafana | logger=migrator t=2025-06-19T11:46:53.384332511Z level=info msg="Executing migration" id="Add is_service_account column to user"
grafana | logger=migrator t=2025-06-19T11:46:53.386276819Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.942108ms
grafana | logger=migrator t=2025-06-19T11:46:53.390579024Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
grafana | logger=migrator t=2025-06-19T11:46:53.398523429Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.944045ms
grafana | logger=migrator t=2025-06-19T11:46:53.405552432Z level=info msg="Executing migration" id="Add uid column to user"
grafana | logger=migrator t=2025-06-19T11:46:53.406416523Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=863.211µs
grafana | logger=migrator t=2025-06-19T11:46:53.410353251Z level=info msg="Executing migration" id="Update uid column values for users"
grafana | logger=migrator t=2025-06-19T11:46:53.410696339Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=341.928µs
grafana | logger=migrator t=2025-06-19T11:46:53.418745807Z level=info msg="Executing migration" id="Add unique index user_uid"
grafana | logger=migrator t=2025-06-19T11:46:53.419882254Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.135987ms
grafana | logger=migrator t=2025-06-19T11:46:53.425124633Z level=info msg="Executing migration" id="Add is_provisioned column to user"
grafana | logger=migrator t=2025-06-19T11:46:53.427071311Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.946778ms
grafana | logger=migrator t=2025-06-19T11:46:53.436697797Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
grafana | logger=migrator t=2025-06-19T11:46:53.437077156Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=379.299µs
grafana | logger=migrator t=2025-06-19T11:46:53.445464702Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once"
grafana | logger=migrator t=2025-06-19T11:46:53.446515328Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=1.043745ms
grafana | logger=migrator t=2025-06-19T11:46:53.451340256Z level=info msg="Executing migration" id="update login and email fields to lowercase"
grafana | logger=migrator t=2025-06-19T11:46:53.452084545Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=743.258µs
grafana | logger=migrator t=2025-06-19T11:46:53.457065537Z level=info msg="Executing migration" id="update login and email fields to lowercase2"
grafana | logger=migrator t=2025-06-19T11:46:53.457437726Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=372.169µs
grafana | logger=migrator t=2025-06-19T11:46:53.461467566Z level=info msg="Executing migration" id="create temp user table v1-7"
grafana | logger=migrator t=2025-06-19T11:46:53.462383788Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=914.692µs
grafana | logger=migrator t=2025-06-19T11:46:53.468648001Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
grafana | logger=migrator t=2025-06-19T11:46:53.46980219Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.153819ms
grafana | logger=migrator t=2025-06-19T11:46:53.475463519Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
grafana | logger=migrator t=2025-06-19T11:46:53.476181427Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=716.448µs
grafana | logger=migrator t=2025-06-19T11:46:53.482302317Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
grafana | logger=migrator t=2025-06-19T11:46:53.483478516Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.172229ms
grafana | logger=migrator t=2025-06-19T11:46:53.490128859Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
grafana | logger=migrator t=2025-06-19T11:46:53.491273617Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.143688ms
grafana | logger=migrator t=2025-06-19T11:46:53.497329486Z level=info msg="Executing migration" id="Update temp_user table charset"
grafana | logger=migrator t=2025-06-19T11:46:53.497373797Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=42.451µs
grafana | logger=migrator t=2025-06-19T11:46:53.503653781Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
grafana | logger=migrator t=2025-06-19T11:46:53.504648175Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=990.944µs
grafana | logger=migrator t=2025-06-19T11:46:53.509339151Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
grafana | logger=migrator t=2025-06-19T11:46:53.510339965Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.000614ms
grafana | logger=migrator t=2025-06-19T11:46:53.516070006Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
grafana | logger=migrator t=2025-06-19T11:46:53.516687962Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=618.886µs
grafana | logger=migrator t=2025-06-19T11:46:53.521255543Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
grafana | logger=migrator t=2025-06-19T11:46:53.522231897Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=975.824µs
grafana | logger=migrator t=2025-06-19T11:46:53.528488321Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
grafana | logger=migrator t=2025-06-19T11:46:53.53292237Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.433979ms
grafana | logger=migrator t=2025-06-19T11:46:53.537846651Z level=info msg="Executing migration" id="create temp_user v2"
grafana | logger=migrator t=2025-06-19T11:46:53.538748013Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=901.072µs
grafana | logger=migrator t=2025-06-19T11:46:53.54351758Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
grafana | logger=migrator t=2025-06-19T11:46:53.544258918Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=741.138µs
grafana | logger=migrator t=2025-06-19T11:46:53.56509002Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
grafana | logger=migrator t=2025-06-19T11:46:53.565986122Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=899.172µs
grafana | logger=migrator t=2025-06-19T11:46:53.571590059Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
grafana | logger=migrator t=2025-06-19T11:46:53.572738708Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.148059ms
grafana | logger=migrator t=2025-06-19T11:46:53.576573312Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
grafana | logger=migrator t=2025-06-19T11:46:53.57771948Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.142428ms
grafana | logger=migrator t=2025-06-19T11:46:53.584823405Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
grafana | logger=migrator t=2025-06-19T11:46:53.585222644Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=397.979µs
grafana | logger=migrator t=2025-06-19T11:46:53.589247003Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
grafana | logger=migrator t=2025-06-19T11:46:53.590045243Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=795.41µs
grafana | logger=migrator t=2025-06-19T11:46:53.59483496Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
grafana | logger=migrator t=2025-06-19T11:46:53.595286122Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=451.322µs
grafana | logger=migrator t=2025-06-19T11:46:53.599326061Z level=info msg="Executing migration" id="create star table"
grafana | logger=migrator t=2025-06-19T11:46:53.599993537Z level=info msg="Migration successfully executed" id="create star table" duration=665.466µs
grafana | logger=migrator t=2025-06-19T11:46:53.606885297Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
grafana | logger=migrator t=2025-06-19T11:46:53.608020804Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.131388ms
grafana | logger=migrator
t=2025-06-19T11:46:53.611753705Z level=info msg="Executing migration" id="Add column dashboard_uid in star" grafana | logger=migrator t=2025-06-19T11:46:53.614524853Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=2.766578ms grafana | logger=migrator t=2025-06-19T11:46:53.618897661Z level=info msg="Executing migration" id="Add column org_id in star" grafana | logger=migrator t=2025-06-19T11:46:53.620332627Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.435146ms grafana | logger=migrator t=2025-06-19T11:46:53.623904864Z level=info msg="Executing migration" id="Add column updated in star" grafana | logger=migrator t=2025-06-19T11:46:53.625293218Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.387684ms grafana | logger=migrator t=2025-06-19T11:46:53.631172412Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" grafana | logger=migrator t=2025-06-19T11:46:53.631925331Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=751.329µs grafana | logger=migrator t=2025-06-19T11:46:53.637497758Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2025-06-19T11:46:53.63880094Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.301952ms grafana | logger=migrator t=2025-06-19T11:46:53.643103896Z level=info msg="Executing migration" id="create index UQE_org_name - v1" grafana | logger=migrator t=2025-06-19T11:46:53.643797453Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=693.437µs grafana | logger=migrator t=2025-06-19T11:46:53.647107184Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2025-06-19T11:46:53.64777473Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=666.926µs grafana | logger=migrator t=2025-06-19T11:46:53.654745572Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2025-06-19T11:46:53.655947341Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.202239ms grafana | logger=migrator t=2025-06-19T11:46:53.659635542Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2025-06-19T11:46:53.660967204Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.291531ms grafana | logger=migrator t=2025-06-19T11:46:53.664940021Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2025-06-19T11:46:53.666203523Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.262992ms grafana | logger=migrator t=2025-06-19T11:46:53.672122008Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2025-06-19T11:46:53.672165599Z level=info msg="Migration successfully executed" id="Update org table charset" duration=44.661µs grafana | logger=migrator t=2025-06-19T11:46:53.676551357Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2025-06-19T11:46:53.676616299Z level=info msg="Migration successfully executed" id="Update 
org_user table charset" duration=65.982µs grafana | logger=migrator t=2025-06-19T11:46:53.691955765Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2025-06-19T11:46:53.692329174Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=372.549µs grafana | logger=migrator t=2025-06-19T11:46:53.699227104Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2025-06-19T11:46:53.700549616Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.318562ms grafana | logger=migrator t=2025-06-19T11:46:53.708412609Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2025-06-19T11:46:53.709812493Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.399424ms grafana | logger=migrator t=2025-06-19T11:46:53.714596071Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2025-06-19T11:46:53.715518673Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=922.282µs grafana | logger=migrator t=2025-06-19T11:46:53.719784669Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator t=2025-06-19T11:46:53.72106775Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.255741ms grafana | logger=migrator t=2025-06-19T11:46:53.726474262Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2025-06-19T11:46:53.727396356Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=919.994µs grafana | logger=migrator t=2025-06-19T11:46:53.732469181Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2025-06-19T11:46:53.733368992Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=898.812µs grafana | logger=migrator t=2025-06-19T11:46:53.737502633Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2025-06-19T11:46:53.742749812Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.246019ms grafana | logger=migrator t=2025-06-19T11:46:53.749761625Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2025-06-19T11:46:53.750699208Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=936.983µs grafana | logger=migrator t=2025-06-19T11:46:53.756748256Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2025-06-19T11:46:53.757801243Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.053586ms grafana | logger=migrator t=2025-06-19T11:46:53.762435156Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2025-06-19T11:46:53.763862391Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.426645ms grafana | logger=migrator t=2025-06-19T11:46:53.769880539Z level=info msg="Executing migration" id="copy dashboard v1 to 
v2" grafana | logger=migrator t=2025-06-19T11:46:53.770275388Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=394.419µs grafana | logger=migrator t=2025-06-19T11:46:53.774872302Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2025-06-19T11:46:53.776138312Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.26433ms grafana | logger=migrator t=2025-06-19T11:46:53.782113659Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2025-06-19T11:46:53.78214904Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=36.381µs grafana | logger=migrator t=2025-06-19T11:46:53.788165857Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2025-06-19T11:46:53.790173977Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.00743ms grafana | logger=migrator t=2025-06-19T11:46:53.795428806Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2025-06-19T11:46:53.796818961Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.391485ms grafana | logger=migrator t=2025-06-19T11:46:53.814450863Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2025-06-19T11:46:53.815912089Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.463256ms grafana | logger=migrator t=2025-06-19T11:46:53.81959244Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2025-06-19T11:46:53.820142073Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=546.893µs grafana | logger=migrator t=2025-06-19T11:46:53.824067219Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2025-06-19T11:46:53.825448343Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.380574ms grafana | logger=migrator t=2025-06-19T11:46:53.830422375Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2025-06-19T11:46:53.8309996Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=575.165µs grafana | logger=migrator t=2025-06-19T11:46:53.834541967Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2025-06-19T11:46:53.83508403Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=542.874µs grafana | logger=migrator t=2025-06-19T11:46:53.838543734Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2025-06-19T11:46:53.838563585Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=20.281µs grafana | logger=migrator t=2025-06-19T11:46:53.844021109Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2025-06-19T11:46:53.844041749Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=21.13µs grafana | logger=migrator t=2025-06-19T11:46:53.847494785Z 
level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2025-06-19T11:46:53.84894945Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.454045ms grafana | logger=migrator t=2025-06-19T11:46:53.852596809Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2025-06-19T11:46:53.855333647Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.733798ms grafana | logger=migrator t=2025-06-19T11:46:53.859207462Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2025-06-19T11:46:53.861232371Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.023929ms grafana | logger=migrator t=2025-06-19T11:46:53.869380032Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2025-06-19T11:46:53.871368831Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.985149ms grafana | logger=migrator t=2025-06-19T11:46:53.876173429Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2025-06-19T11:46:53.87661995Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=446.781µs grafana | logger=migrator t=2025-06-19T11:46:53.881177062Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2025-06-19T11:46:53.882517855Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.345192ms grafana | logger=migrator t=2025-06-19T11:46:53.886575005Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2025-06-19T11:46:53.887561159Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=987.474µs grafana | logger=migrator t=2025-06-19T11:46:53.893763921Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2025-06-19T11:46:53.893805832Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=45.371µs grafana | logger=migrator t=2025-06-19T11:46:53.898040376Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2025-06-19T11:46:53.899167283Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.123067ms grafana | logger=migrator t=2025-06-19T11:46:53.902773152Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2025-06-19T11:46:53.903633613Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=856.451µs grafana | logger=migrator t=2025-06-19T11:46:53.909831245Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-19T11:46:53.915514925Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.68226ms grafana | logger=migrator t=2025-06-19T11:46:53.919432561Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2025-06-19T11:46:53.92017816Z 
level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=745.659µs grafana | logger=migrator t=2025-06-19T11:46:53.924379623Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2025-06-19T11:46:53.925923161Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.542869ms grafana | logger=migrator t=2025-06-19T11:46:53.953161719Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2025-06-19T11:46:53.955202699Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=2.04194ms grafana | logger=migrator t=2025-06-19T11:46:53.960193942Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2025-06-19T11:46:53.961021032Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=830.05µs grafana | logger=migrator t=2025-06-19T11:46:53.96499097Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2025-06-19T11:46:53.965621586Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=630.275µs grafana | logger=migrator t=2025-06-19T11:46:53.972167806Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2025-06-19T11:46:53.975230091Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.060995ms grafana | logger=migrator t=2025-06-19T11:46:53.979730762Z level=info msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2025-06-19T11:46:53.981382642Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.65032ms grafana | logger=migrator t=2025-06-19T11:46:53.985436272Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2025-06-19T11:46:53.985688188Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=251.876µs grafana | logger=migrator t=2025-06-19T11:46:53.990992388Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2025-06-19T11:46:53.991180363Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=188.105µs grafana | logger=migrator t=2025-06-19T11:46:53.99433973Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2025-06-19T11:46:53.99513786Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=795.54µs grafana | logger=migrator t=2025-06-19T11:46:54.000094992Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2025-06-19T11:46:54.002495171Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.399109ms grafana | logger=migrator t=2025-06-19T11:46:54.009747101Z level=info msg="Executing migration" id="Add deleted for dashboard" grafana | logger=migrator t=2025-06-19T11:46:54.01214438Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.396859ms grafana | logger=migrator t=2025-06-19T11:46:54.016446747Z level=info 
msg="Executing migration" id="Add index for deleted" grafana | logger=migrator t=2025-06-19T11:46:54.017275857Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=828.58µs grafana | logger=migrator t=2025-06-19T11:46:54.021068992Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag" grafana | logger=migrator t=2025-06-19T11:46:54.023460981Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=2.391079ms grafana | logger=migrator t=2025-06-19T11:46:54.028153557Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag" grafana | logger=migrator t=2025-06-19T11:46:54.030406013Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.252336ms grafana | logger=migrator t=2025-06-19T11:46:54.034004662Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag" grafana | logger=migrator t=2025-06-19T11:46:54.034406382Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=399.23µs grafana | logger=migrator t=2025-06-19T11:46:54.037824278Z level=info msg="Executing migration" id="Add apiVersion for dashboard" grafana | logger=migrator t=2025-06-19T11:46:54.041022857Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=3.198329ms grafana | logger=migrator t=2025-06-19T11:46:54.044651256Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" grafana | logger=migrator t=2025-06-19T11:46:54.045497927Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=846.201µs grafana | logger=migrator t=2025-06-19T11:46:54.050130312Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" grafana | logger=migrator t=2025-06-19T11:46:54.050557103Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=426.781µs grafana | logger=migrator t=2025-06-19T11:46:54.054412918Z level=info msg="Executing migration" id="create data_source table" grafana | logger=migrator t=2025-06-19T11:46:54.05529136Z level=info msg="Migration successfully executed" id="create data_source table" duration=879.192µs grafana | logger=migrator t=2025-06-19T11:46:54.059103745Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2025-06-19T11:46:54.059955975Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=851.73µs grafana | logger=migrator t=2025-06-19T11:46:54.065059152Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2025-06-19T11:46:54.066721514Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.661482ms grafana | logger=migrator t=2025-06-19T11:46:54.08192521Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2025-06-19T11:46:54.083663034Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.733894ms grafana | logger=migrator t=2025-06-19T11:46:54.089099419Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2025-06-19T11:46:54.089859127Z 
level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=759.278µs grafana | logger=migrator t=2025-06-19T11:46:54.094833491Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2025-06-19T11:46:54.101989948Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=7.155407ms grafana | logger=migrator t=2025-06-19T11:46:54.106675994Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2025-06-19T11:46:54.107277199Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=600.925µs grafana | logger=migrator t=2025-06-19T11:46:54.112946709Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2025-06-19T11:46:54.114663052Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.715793ms grafana | logger=migrator t=2025-06-19T11:46:54.123883271Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2025-06-19T11:46:54.124732562Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=845.471µs grafana | logger=migrator t=2025-06-19T11:46:54.128882055Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2025-06-19T11:46:54.12990071Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.017075ms grafana | logger=migrator t=2025-06-19T11:46:54.136876303Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2025-06-19T11:46:54.139461357Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.583744ms grafana | logger=migrator t=2025-06-19T11:46:54.144562934Z level=info msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2025-06-19T11:46:54.147127088Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.563114ms grafana | logger=migrator t=2025-06-19T11:46:54.152562001Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator t=2025-06-19T11:46:54.152660495Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=102.224µs grafana | logger=migrator t=2025-06-19T11:46:54.157132225Z level=info msg="Executing migration" id="Update initial version to 1" grafana | logger=migrator t=2025-06-19T11:46:54.157593466Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=462.621µs grafana | logger=migrator t=2025-06-19T11:46:54.164742034Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2025-06-19T11:46:54.16782552Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.084006ms grafana | logger=migrator t=2025-06-19T11:46:54.177705125Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator t=2025-06-19T11:46:54.177996753Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=293.378µs grafana | logger=migrator t=2025-06-19T11:46:54.185252122Z level=info msg="Executing migration" id="Update json_data with nulls" 
grafana | logger=migrator t=2025-06-19T11:46:54.185608011Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=355.219µs grafana | logger=migrator t=2025-06-19T11:46:54.191960349Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2025-06-19T11:46:54.195097757Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.137468ms grafana | logger=migrator t=2025-06-19T11:46:54.212281863Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2025-06-19T11:46:54.213535183Z level=info msg="Migration successfully executed" id="Update uid value" duration=1.266301ms grafana | logger=migrator t=2025-06-19T11:46:54.223802298Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2025-06-19T11:46:54.224802033Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.001595ms grafana | logger=migrator t=2025-06-19T11:46:54.230195997Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" grafana | logger=migrator t=2025-06-19T11:46:54.230951145Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=754.828µs grafana | logger=migrator t=2025-06-19T11:46:54.238102093Z level=info msg="Executing migration" id="Add is_prunable column" grafana | logger=migrator t=2025-06-19T11:46:54.240675907Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=2.573474ms grafana | logger=migrator t=2025-06-19T11:46:54.247306751Z level=info msg="Executing migration" id="Add api_version column" grafana | logger=migrator t=2025-06-19T11:46:54.25050552Z level=info msg="Migration successfully executed" id="Add api_version column" duration=3.195829ms grafana | logger=migrator t=2025-06-19T11:46:54.260243372Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" grafana | logger=migrator t=2025-06-19T11:46:54.260303603Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=64.401µs grafana | logger=migrator t=2025-06-19T11:46:54.268284871Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2025-06-19T11:46:54.269514731Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.22923ms grafana | logger=migrator t=2025-06-19T11:46:54.275149721Z level=info msg="Executing migration" id="add index api_key.account_id" grafana | logger=migrator t=2025-06-19T11:46:54.275784197Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=634.136µs grafana | logger=migrator t=2025-06-19T11:46:54.281795707Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2025-06-19T11:46:54.283778515Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.981298ms grafana | logger=migrator t=2025-06-19T11:46:54.28882832Z level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2025-06-19T11:46:54.291885586Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=3.029785ms grafana | logger=migrator t=2025-06-19T11:46:54.298636034Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" grafana | logger=migrator t=2025-06-19T11:46:54.299724931Z 
level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.091267ms grafana | logger=migrator t=2025-06-19T11:46:54.308436357Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" grafana | logger=migrator t=2025-06-19T11:46:54.309735089Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.302783ms grafana | logger=migrator t=2025-06-19T11:46:54.315157934Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2025-06-19T11:46:54.316168639Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.008784ms grafana | logger=migrator t=2025-06-19T11:46:54.333830036Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2025-06-19T11:46:54.342527992Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.703176ms grafana | logger=migrator t=2025-06-19T11:46:54.348403897Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator t=2025-06-19T11:46:54.349048914Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=642.897µs grafana | logger=migrator t=2025-06-19T11:46:54.360326674Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2025-06-19T11:46:54.361684247Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.360133ms grafana | logger=migrator t=2025-06-19T11:46:54.370496545Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2025-06-19T11:46:54.371562782Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.067607ms grafana | logger=migrator t=2025-06-19T11:46:54.380122464Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2025-06-19T11:46:54.381222821Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.102947ms grafana | logger=migrator t=2025-06-19T11:46:54.391093237Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2025-06-19T11:46:54.391440375Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=347.969µs grafana | logger=migrator t=2025-06-19T11:46:54.397402823Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2025-06-19T11:46:54.398057229Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=651.426µs grafana | logger=migrator t=2025-06-19T11:46:54.404829017Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2025-06-19T11:46:54.404860448Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=32.191µs grafana | logger=migrator t=2025-06-19T11:46:54.411539453Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2025-06-19T11:46:54.414409224Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.869351ms grafana | logger=migrator t=2025-06-19T11:46:54.419722676Z level=info msg="Executing migration" id="Add service account foreign key" grafana | 
logger=migrator t=2025-06-19T11:46:54.421630684Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.908228ms grafana | logger=migrator t=2025-06-19T11:46:54.426600206Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2025-06-19T11:46:54.426781171Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=178.495µs grafana | logger=migrator t=2025-06-19T11:46:54.433482727Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2025-06-19T11:46:54.436303967Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.82028ms grafana | logger=migrator t=2025-06-19T11:46:54.463186753Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2025-06-19T11:46:54.466484096Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=3.298733ms grafana | logger=migrator t=2025-06-19T11:46:54.471552951Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2025-06-19T11:46:54.472405713Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=852.082µs grafana | logger=migrator t=2025-06-19T11:46:54.479049217Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2025-06-19T11:46:54.479604571Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=558.643µs grafana | logger=migrator t=2025-06-19T11:46:54.487284561Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2025-06-19T11:46:54.488953943Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.669522ms grafana | logger=migrator t=2025-06-19T11:46:54.494249443Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2025-06-19T11:46:54.495263899Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.014156ms grafana | logger=migrator t=2025-06-19T11:46:54.500881708Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2025-06-19T11:46:54.501825342Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=945.193µs grafana | logger=migrator t=2025-06-19T11:46:54.510310752Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2025-06-19T11:46:54.511233185Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=921.633µs grafana | logger=migrator t=2025-06-19T11:46:54.520415763Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2025-06-19T11:46:54.520493895Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=80.202µs grafana | logger=migrator t=2025-06-19T11:46:54.528125724Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2025-06-19T11:46:54.528184666Z level=info msg="Migration successfully 
executed" id="Update dashboard_snapshot table charset" duration=62.262µs grafana | logger=migrator t=2025-06-19T11:46:54.533651161Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2025-06-19T11:46:54.538503181Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.85187ms grafana | logger=migrator t=2025-06-19T11:46:54.545085295Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2025-06-19T11:46:54.547783282Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.697367ms grafana | logger=migrator t=2025-06-19T11:46:54.556947289Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2025-06-19T11:46:54.557080452Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=139.964µs grafana | logger=migrator t=2025-06-19T11:46:54.563497451Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator t=2025-06-19T11:46:54.564534217Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.036466ms grafana | logger=migrator t=2025-06-19T11:46:54.584646895Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2025-06-19T11:46:54.585965518Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.315223ms grafana | logger=migrator t=2025-06-19T11:46:54.591969386Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2025-06-19T11:46:54.592008327Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=39.951µs grafana | logger=migrator t=2025-06-19T11:46:54.597956335Z level=info msg="Executing migration" id="create plugin_setting table" grafana | logger=migrator t=2025-06-19T11:46:54.599202256Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.245321ms grafana | logger=migrator t=2025-06-19T11:46:54.606015515Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator t=2025-06-19T11:46:54.607253526Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.237371ms grafana | logger=migrator t=2025-06-19T11:46:54.611111471Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" grafana | logger=migrator t=2025-06-19T11:46:54.614455214Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.343163ms grafana | logger=migrator t=2025-06-19T11:46:54.61954424Z level=info msg="Executing migration" id="Update plugin_setting table charset" grafana | logger=migrator t=2025-06-19T11:46:54.619578131Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=30.831µs grafana | logger=migrator t=2025-06-19T11:46:54.628560794Z level=info msg="Executing migration" id="update NULL org_id to 1" grafana | logger=migrator t=2025-06-19T11:46:54.628912193Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=352.349µs grafana | logger=migrator 
t=2025-06-19T11:46:54.633365643Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1" grafana | logger=migrator t=2025-06-19T11:46:54.642203732Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=8.836569ms grafana | logger=migrator t=2025-06-19T11:46:54.648369315Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2025-06-19T11:46:54.649852742Z level=info msg="Migration successfully executed" id="create session table" duration=1.482897ms grafana | logger=migrator t=2025-06-19T11:46:54.657310987Z level=info msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2025-06-19T11:46:54.65740613Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=95.522µs grafana | logger=migrator t=2025-06-19T11:46:54.662868655Z level=info msg="Executing migration" id="Drop old table playlist_item table" grafana | logger=migrator t=2025-06-19T11:46:54.662993148Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=124.933µs grafana | logger=migrator t=2025-06-19T11:46:54.668465384Z level=info msg="Executing migration" id="create playlist table v2" grafana | logger=migrator t=2025-06-19T11:46:54.669887259Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.421904ms grafana | logger=migrator t=2025-06-19T11:46:54.675389465Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2025-06-19T11:46:54.676159734Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=767.359µs grafana | logger=migrator t=2025-06-19T11:46:54.683962197Z level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2025-06-19T11:46:54.684006259Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=43.971µs grafana | logger=migrator t=2025-06-19T11:46:54.69047664Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2025-06-19T11:46:54.69050957Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=34.29µs grafana | logger=migrator t=2025-06-19T11:46:54.709990283Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2025-06-19T11:46:54.714892804Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.901252ms grafana | logger=migrator t=2025-06-19T11:46:54.721808926Z level=info msg="Executing migration" id="Add playlist column updated_at" grafana | logger=migrator t=2025-06-19T11:46:54.72559241Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.782434ms grafana | logger=migrator t=2025-06-19T11:46:54.73083552Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2025-06-19T11:46:54.730922972Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=87.652µs grafana | logger=migrator t=2025-06-19T11:46:54.73648078Z level=info msg="Executing migration" id="drop preferences table v3" grafana | logger=migrator t=2025-06-19T11:46:54.736723826Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=243.076µs grafana | logger=migrator t=2025-06-19T11:46:54.742796956Z 
level=info msg="Executing migration" id="create preferences table v3" grafana | logger=migrator t=2025-06-19T11:46:54.744185091Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.388075ms grafana | logger=migrator t=2025-06-19T11:46:54.750851476Z level=info msg="Executing migration" id="Update preferences table charset" grafana | logger=migrator t=2025-06-19T11:46:54.750876527Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=25.721µs grafana | logger=migrator t=2025-06-19T11:46:54.754912357Z level=info msg="Executing migration" id="Add column team_id in preferences" grafana | logger=migrator t=2025-06-19T11:46:54.760149286Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.235539ms grafana | logger=migrator t=2025-06-19T11:46:54.765446138Z level=info msg="Executing migration" id="Update team_id column values in preferences" grafana | logger=migrator t=2025-06-19T11:46:54.765624562Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=176.444µs grafana | logger=migrator t=2025-06-19T11:46:54.772122444Z level=info msg="Executing migration" id="Add column week_start in preferences" grafana | logger=migrator t=2025-06-19T11:46:54.777356244Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=5.2344ms grafana | logger=migrator t=2025-06-19T11:46:54.782548092Z level=info msg="Executing migration" id="Add column preferences.json_data" grafana | logger=migrator t=2025-06-19T11:46:54.785904896Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.356284ms grafana | logger=migrator t=2025-06-19T11:46:54.791609867Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" grafana | logger=migrator t=2025-06-19T11:46:54.791625157Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=15.66µs grafana | logger=migrator t=2025-06-19T11:46:54.798748564Z level=info msg="Executing migration" id="Add preferences index org_id" grafana | logger=migrator t=2025-06-19T11:46:54.800113918Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.364955ms grafana | logger=migrator t=2025-06-19T11:46:54.807276355Z level=info msg="Executing migration" id="Add preferences index user_id" grafana | logger=migrator t=2025-06-19T11:46:54.808153377Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=878.192µs grafana | logger=migrator t=2025-06-19T11:46:54.817618462Z level=info msg="Executing migration" id="create alert table v1" grafana | logger=migrator t=2025-06-19T11:46:54.819146329Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.529407ms grafana | logger=migrator t=2025-06-19T11:46:54.837412062Z level=info msg="Executing migration" id="add index alert org_id & id " grafana | logger=migrator t=2025-06-19T11:46:54.838731785Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.318423ms grafana | logger=migrator t=2025-06-19T11:46:54.845247867Z level=info msg="Executing migration" id="add index alert state" grafana | logger=migrator t=2025-06-19T11:46:54.846699032Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.450075ms grafana | logger=migrator 
t=2025-06-19T11:46:54.854584368Z level=info msg="Executing migration" id="add index alert dashboard_id" grafana | logger=migrator t=2025-06-19T11:46:54.856115616Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.531198ms grafana | logger=migrator t=2025-06-19T11:46:55.021379424Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" grafana | logger=migrator t=2025-06-19T11:46:55.022634595Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.258131ms grafana | logger=migrator t=2025-06-19T11:46:55.120099441Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" grafana | logger=migrator t=2025-06-19T11:46:55.121457705Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.360764ms grafana | logger=migrator t=2025-06-19T11:46:55.135673287Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" grafana | logger=migrator t=2025-06-19T11:46:55.136662802Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=991.645µs grafana | logger=migrator t=2025-06-19T11:46:55.141950593Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" grafana | logger=migrator t=2025-06-19T11:46:55.151558091Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=9.606418ms grafana | logger=migrator t=2025-06-19T11:46:55.158421232Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" grafana | logger=migrator t=2025-06-19T11:46:55.15917622Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=753.978µs grafana | logger=migrator t=2025-06-19T11:46:55.16480062Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" grafana | logger=migrator t=2025-06-19T11:46:55.165949288Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.148008ms grafana | logger=migrator t=2025-06-19T11:46:55.169928787Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" grafana | logger=migrator t=2025-06-19T11:46:55.170354858Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=425.631µs grafana | logger=migrator t=2025-06-19T11:46:55.178606532Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" grafana | logger=migrator t=2025-06-19T11:46:55.17973569Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=1.133998ms grafana | logger=migrator t=2025-06-19T11:46:55.185445391Z level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2025-06-19T11:46:55.186458587Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.013136ms grafana | logger=migrator t=2025-06-19T11:46:55.192173749Z level=info msg="Executing migration" id="Add column is_default" grafana | logger=migrator t=2025-06-19T11:46:55.196005163Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.831644ms grafana | logger=migrator t=2025-06-19T11:46:55.200217047Z level=info msg="Executing 
migration" id="Add column frequency" grafana | logger=migrator t=2025-06-19T11:46:55.204180796Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.963279ms grafana | logger=migrator t=2025-06-19T11:46:55.208357879Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2025-06-19T11:46:55.212187635Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.829256ms grafana | logger=migrator t=2025-06-19T11:46:55.225902665Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2025-06-19T11:46:55.230045277Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=4.146442ms grafana | logger=migrator t=2025-06-19T11:46:55.233918583Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2025-06-19T11:46:55.234750144Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=831.161µs grafana | logger=migrator t=2025-06-19T11:46:55.245010738Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2025-06-19T11:46:55.245063669Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=53.181µs grafana | logger=migrator t=2025-06-19T11:46:55.253973921Z level=info msg="Executing migration" id="Update alert_notification table charset" grafana | logger=migrator t=2025-06-19T11:46:55.254000852Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=26.981µs grafana | logger=migrator t=2025-06-19T11:46:55.258722338Z level=info msg="Executing migration" id="create notification_journal table v1" grafana | logger=migrator t=2025-06-19T11:46:55.259737014Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.014736ms grafana | logger=migrator t=2025-06-19T11:46:55.265516717Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-19T11:46:55.266511101Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=994.194µs grafana | logger=migrator t=2025-06-19T11:46:55.272968101Z level=info msg="Executing migration" id="drop alert_notification_journal" grafana | logger=migrator t=2025-06-19T11:46:55.274261484Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.293173ms grafana | logger=migrator t=2025-06-19T11:46:55.279966965Z level=info msg="Executing migration" id="create alert_notification_state table v1" grafana | logger=migrator t=2025-06-19T11:46:55.281259057Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.288672ms grafana | logger=migrator t=2025-06-19T11:46:55.285340879Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-19T11:46:55.286037626Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=693.046µs grafana | logger=migrator t=2025-06-19T11:46:55.294555027Z level=info msg="Executing migration" id="Add for to alert table" grafana | logger=migrator t=2025-06-19T11:46:55.299358806Z level=info msg="Migration 
successfully executed" id="Add for to alert table" duration=4.81013ms grafana | logger=migrator t=2025-06-19T11:46:55.304655668Z level=info msg="Executing migration" id="Add column uid in alert_notification" grafana | logger=migrator t=2025-06-19T11:46:55.308119223Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.463606ms grafana | logger=migrator t=2025-06-19T11:46:55.312263525Z level=info msg="Executing migration" id="Update uid column values in alert_notification" grafana | logger=migrator t=2025-06-19T11:46:55.312477552Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=213.077µs grafana | logger=migrator t=2025-06-19T11:46:55.321653729Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" grafana | logger=migrator t=2025-06-19T11:46:55.323114745Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.463596ms grafana | logger=migrator t=2025-06-19T11:46:55.327876923Z level=info msg="Executing migration" id="Remove unique index org_id_name" grafana | logger=migrator t=2025-06-19T11:46:55.328785795Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=883.662µs grafana | logger=migrator t=2025-06-19T11:46:55.33463511Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" grafana | logger=migrator t=2025-06-19T11:46:55.339293146Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=4.656966ms grafana | logger=migrator t=2025-06-19T11:46:55.360501622Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" grafana | logger=migrator t=2025-06-19T11:46:55.360570224Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=73.782µs grafana | logger=migrator t=2025-06-19T11:46:55.365400613Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" grafana | logger=migrator t=2025-06-19T11:46:55.366657225Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.256162ms grafana | logger=migrator t=2025-06-19T11:46:55.370297825Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" grafana | logger=migrator t=2025-06-19T11:46:55.371408323Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.110208ms grafana | logger=migrator t=2025-06-19T11:46:55.378279743Z level=info msg="Executing migration" id="Drop old annotation table v4" grafana | logger=migrator t=2025-06-19T11:46:55.378455087Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=176.514µs grafana | logger=migrator t=2025-06-19T11:46:55.382287152Z level=info msg="Executing migration" id="create annotation table v5" grafana | logger=migrator t=2025-06-19T11:46:55.38342215Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.134728ms grafana | logger=migrator t=2025-06-19T11:46:55.387589253Z level=info msg="Executing migration" id="add index annotation 0 v3" grafana | logger=migrator t=2025-06-19T11:46:55.389221594Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.632971ms grafana | logger=migrator t=2025-06-19T11:46:55.395287184Z 
level=info msg="Executing migration" id="add index annotation 1 v3" grafana | logger=migrator t=2025-06-19T11:46:55.396254078Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=961.484µs grafana | logger=migrator t=2025-06-19T11:46:55.400602856Z level=info msg="Executing migration" id="add index annotation 2 v3" grafana | logger=migrator t=2025-06-19T11:46:55.401543649Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=940.073µs grafana | logger=migrator t=2025-06-19T11:46:55.405214401Z level=info msg="Executing migration" id="add index annotation 3 v3" grafana | logger=migrator t=2025-06-19T11:46:55.406237036Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.022025ms grafana | logger=migrator t=2025-06-19T11:46:55.412253305Z level=info msg="Executing migration" id="add index annotation 4 v3" grafana | logger=migrator t=2025-06-19T11:46:55.413823874Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.569539ms grafana | logger=migrator t=2025-06-19T11:46:55.419119416Z level=info msg="Executing migration" id="Update annotation table charset" grafana | logger=migrator t=2025-06-19T11:46:55.419154127Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=35.071µs grafana | logger=migrator t=2025-06-19T11:46:55.422980001Z level=info msg="Executing migration" id="Add column region_id to annotation table" grafana | logger=migrator t=2025-06-19T11:46:55.427278648Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.298027ms grafana | logger=migrator t=2025-06-19T11:46:55.434279561Z level=info msg="Executing migration" id="Drop category_id index" grafana | logger=migrator t=2025-06-19T11:46:55.435121332Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=841.381µs grafana | logger=migrator t=2025-06-19T11:46:55.438494886Z level=info msg="Executing migration" id="Add column tags to annotation table" grafana | logger=migrator t=2025-06-19T11:46:55.442670429Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.171353ms grafana | logger=migrator t=2025-06-19T11:46:55.446133175Z level=info msg="Executing migration" id="Create annotation_tag table v2" grafana | logger=migrator t=2025-06-19T11:46:55.446788861Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=655.536µs grafana | logger=migrator t=2025-06-19T11:46:55.454382579Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" grafana | logger=migrator t=2025-06-19T11:46:55.455759684Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.381015ms grafana | logger=migrator t=2025-06-19T11:46:55.461624809Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" grafana | logger=migrator t=2025-06-19T11:46:55.462568122Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=943.773µs grafana | logger=migrator t=2025-06-19T11:46:55.483876771Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" grafana | logger=migrator t=2025-06-19T11:46:55.495265174Z level=info msg="Migration successfully executed" id="Rename table annotation_tag 
to annotation_tag_v2 - v2" duration=11.386522ms grafana | logger=migrator t=2025-06-19T11:46:55.49958628Z level=info msg="Executing migration" id="Create annotation_tag table v3" grafana | logger=migrator t=2025-06-19T11:46:55.50034511Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=758.13µs grafana | logger=migrator t=2025-06-19T11:46:55.507267231Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" grafana | logger=migrator t=2025-06-19T11:46:55.508181253Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=913.222µs grafana | logger=migrator t=2025-06-19T11:46:55.513016303Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" grafana | logger=migrator t=2025-06-19T11:46:55.513362602Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=346.159µs grafana | logger=migrator t=2025-06-19T11:46:55.517921795Z level=info msg="Executing migration" id="drop table annotation_tag_v2" grafana | logger=migrator t=2025-06-19T11:46:55.51853789Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=615.485µs grafana | logger=migrator t=2025-06-19T11:46:55.525087972Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" grafana | logger=migrator t=2025-06-19T11:46:55.525536294Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=448.782µs grafana | logger=migrator t=2025-06-19T11:46:55.53099025Z level=info msg="Executing migration" id="Add created time to annotation table" grafana | logger=migrator t=2025-06-19T11:46:55.535159362Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.168502ms grafana | logger=migrator t=2025-06-19T11:46:55.539018898Z level=info msg="Executing migration" id="Add updated time to annotation table" grafana | logger=migrator t=2025-06-19T11:46:55.543142121Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.122252ms grafana | logger=migrator t=2025-06-19T11:46:55.547130649Z level=info msg="Executing migration" id="Add index for created in annotation table" grafana | logger=migrator t=2025-06-19T11:46:55.548148445Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.016746ms grafana | logger=migrator t=2025-06-19T11:46:55.553944939Z level=info msg="Executing migration" id="Add index for updated in annotation table" grafana | logger=migrator t=2025-06-19T11:46:55.555457575Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.512597ms grafana | logger=migrator t=2025-06-19T11:46:55.560097921Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" grafana | logger=migrator t=2025-06-19T11:46:55.560400058Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=301.577µs grafana | logger=migrator t=2025-06-19T11:46:55.565469734Z level=info msg="Executing migration" id="Add epoch_end column" grafana | logger=migrator t=2025-06-19T11:46:55.572328284Z level=info msg="Migration successfully executed" id="Add epoch_end 
column" duration=6.86885ms grafana | logger=migrator t=2025-06-19T11:46:55.585110881Z level=info msg="Executing migration" id="Add index for epoch_end" grafana | logger=migrator t=2025-06-19T11:46:55.58628004Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.164519ms grafana | logger=migrator t=2025-06-19T11:46:55.592386351Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" grafana | logger=migrator t=2025-06-19T11:46:55.592888894Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=501.373µs grafana | logger=migrator t=2025-06-19T11:46:55.636730121Z level=info msg="Executing migration" id="Move region to single row" grafana | logger=migrator t=2025-06-19T11:46:55.637278974Z level=info msg="Migration successfully executed" id="Move region to single row" duration=549.794µs grafana | logger=migrator t=2025-06-19T11:46:55.641631062Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" grafana | logger=migrator t=2025-06-19T11:46:55.643407747Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.776125ms grafana | logger=migrator t=2025-06-19T11:46:55.650513752Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" grafana | logger=migrator t=2025-06-19T11:46:55.651897707Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.381845ms grafana | logger=migrator t=2025-06-19T11:46:55.656634634Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-19T11:46:55.657812904Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.17755ms grafana | logger=migrator t=2025-06-19T11:46:55.664376786Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-19T11:46:55.665559196Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.18229ms grafana | logger=migrator t=2025-06-19T11:46:55.671656437Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" grafana | logger=migrator t=2025-06-19T11:46:55.673053911Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.401684ms grafana | logger=migrator t=2025-06-19T11:46:55.678316842Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" grafana | logger=migrator t=2025-06-19T11:46:55.679476361Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.158699ms grafana | logger=migrator t=2025-06-19T11:46:55.685944271Z level=info msg="Executing migration" id="Increase tags column to length 4096" grafana | logger=migrator t=2025-06-19T11:46:55.685967322Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=23.531µs grafana | logger=migrator t=2025-06-19T11:46:55.690297429Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" grafana | logger=migrator t=2025-06-19T11:46:55.69035005Z level=info msg="Migration successfully executed" 
id="Increase prev_state column to length 40 not null" duration=53.891µs grafana | logger=migrator t=2025-06-19T11:46:55.694829871Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" grafana | logger=migrator t=2025-06-19T11:46:55.694861562Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=39.541µs grafana | logger=migrator t=2025-06-19T11:46:55.702061611Z level=info msg="Executing migration" id="create test_data table" grafana | logger=migrator t=2025-06-19T11:46:55.703553818Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.487566ms grafana | logger=migrator t=2025-06-19T11:46:55.709074624Z level=info msg="Executing migration" id="create dashboard_version table v1" grafana | logger=migrator t=2025-06-19T11:46:55.710645374Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.56921ms grafana | logger=migrator t=2025-06-19T11:46:55.714894139Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" grafana | logger=migrator t=2025-06-19T11:46:55.715915565Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.021086ms grafana | logger=migrator t=2025-06-19T11:46:55.73914261Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" grafana | logger=migrator t=2025-06-19T11:46:55.741282834Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=2.140013ms grafana | logger=migrator t=2025-06-19T11:46:55.746636386Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" grafana | logger=migrator t=2025-06-19T11:46:55.746940484Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=303.208µs grafana | logger=migrator t=2025-06-19T11:46:55.752955093Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" grafana | logger=migrator t=2025-06-19T11:46:55.753756582Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=800.649µs grafana | logger=migrator t=2025-06-19T11:46:55.75972969Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" grafana | logger=migrator t=2025-06-19T11:46:55.759761501Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=33.161µs grafana | logger=migrator t=2025-06-19T11:46:55.768572659Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" grafana | logger=migrator t=2025-06-19T11:46:55.776687751Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=8.103891ms grafana | logger=migrator t=2025-06-19T11:46:55.816878478Z level=info msg="Executing migration" id="create team table" grafana | logger=migrator t=2025-06-19T11:46:55.818473747Z level=info msg="Migration successfully executed" id="create team table" duration=1.595629ms grafana | logger=migrator t=2025-06-19T11:46:55.823317317Z level=info msg="Executing migration" id="add index team.org_id" grafana | logger=migrator t=2025-06-19T11:46:55.824301091Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=983.074µs grafana | logger=migrator 
t=2025-06-19T11:46:55.831095Z level=info msg="Executing migration" id="add unique index team_org_id_name" grafana | logger=migrator t=2025-06-19T11:46:55.832199147Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.103497ms grafana | logger=migrator t=2025-06-19T11:46:55.836397112Z level=info msg="Executing migration" id="Add column uid in team" grafana | logger=migrator t=2025-06-19T11:46:55.841467317Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=5.069285ms grafana | logger=migrator t=2025-06-19T11:46:55.845861106Z level=info msg="Executing migration" id="Update uid column values in team" grafana | logger=migrator t=2025-06-19T11:46:55.846239235Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=378.239µs grafana | logger=migrator t=2025-06-19T11:46:55.865584105Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" grafana | logger=migrator t=2025-06-19T11:46:55.867584195Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=2.00576ms grafana | logger=migrator t=2025-06-19T11:46:55.872305791Z level=info msg="Executing migration" id="Add column external_uid in team" grafana | logger=migrator t=2025-06-19T11:46:55.877537281Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=5.23652ms grafana | logger=migrator t=2025-06-19T11:46:55.881585812Z level=info msg="Executing migration" id="Add column is_provisioned in team" grafana | logger=migrator t=2025-06-19T11:46:55.887790756Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=6.205344ms grafana | logger=migrator t=2025-06-19T11:46:55.893943468Z level=info msg="Executing migration" id="create team member table" grafana | logger=migrator t=2025-06-19T11:46:55.894791889Z level=info msg="Migration successfully executed" id="create team member table" duration=847.761µs grafana | logger=migrator t=2025-06-19T11:46:55.90084973Z level=info msg="Executing migration" id="add index team_member.org_id" grafana | logger=migrator t=2025-06-19T11:46:55.902521491Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.670801ms grafana | logger=migrator t=2025-06-19T11:46:55.909397312Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" grafana | logger=migrator t=2025-06-19T11:46:55.911628056Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=2.230394ms grafana | logger=migrator t=2025-06-19T11:46:55.916071287Z level=info msg="Executing migration" id="add index team_member.team_id" grafana | logger=migrator t=2025-06-19T11:46:55.917316557Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.24448ms grafana | logger=migrator t=2025-06-19T11:46:55.921196094Z level=info msg="Executing migration" id="Add column email to team table" grafana | logger=migrator t=2025-06-19T11:46:55.925336876Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.140732ms grafana | logger=migrator t=2025-06-19T11:46:55.928882805Z level=info msg="Executing migration" id="Add column external to team_member table" grafana | logger=migrator t=2025-06-19T11:46:55.932923265Z level=info msg="Migration successfully executed" id="Add column external to team_member table" 
duration=4.03892ms grafana | logger=migrator t=2025-06-19T11:46:55.938639796Z level=info msg="Executing migration" id="Add column permission to team_member table" grafana | logger=migrator t=2025-06-19T11:46:55.943961788Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.321082ms grafana | logger=migrator t=2025-06-19T11:46:55.951852974Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" grafana | logger=migrator t=2025-06-19T11:46:55.952554481Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=700.687µs grafana | logger=migrator t=2025-06-19T11:46:55.95771484Z level=info msg="Executing migration" id="create dashboard acl table" grafana | logger=migrator t=2025-06-19T11:46:55.958617212Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=902.472µs grafana | logger=migrator t=2025-06-19T11:46:55.966179569Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" grafana | logger=migrator t=2025-06-19T11:46:55.967755108Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.574029ms grafana | logger=migrator t=2025-06-19T11:46:55.973600934Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" grafana | logger=migrator t=2025-06-19T11:46:55.975585452Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.979918ms grafana | logger=migrator t=2025-06-19T11:46:56.003554196Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" grafana | logger=migrator t=2025-06-19T11:46:56.005309669Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.764634ms grafana | logger=migrator t=2025-06-19T11:46:56.013068399Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" grafana | logger=migrator t=2025-06-19T11:46:56.014122575Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.053906ms grafana | logger=migrator t=2025-06-19T11:46:56.092175922Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" grafana | logger=migrator t=2025-06-19T11:46:56.094338535Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=2.161823ms grafana | logger=migrator t=2025-06-19T11:46:56.120430453Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" grafana | logger=migrator t=2025-06-19T11:46:56.121779245Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.345783ms grafana | logger=migrator t=2025-06-19T11:46:56.12932604Z level=info msg="Executing migration" id="add index dashboard_permission" grafana | logger=migrator t=2025-06-19T11:46:56.130281373Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=955.043µs grafana | logger=migrator t=2025-06-19T11:46:56.134187759Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" grafana | logger=migrator t=2025-06-19T11:46:56.134679641Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=491.812µs grafana | logger=migrator t=2025-06-19T11:46:56.14199246Z level=info msg="Executing migration" 
id="delete acl rules for deleted dashboards and folders" grafana | logger=migrator t=2025-06-19T11:46:56.142452601Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=461.811µs grafana | logger=migrator t=2025-06-19T11:46:56.151665306Z level=info msg="Executing migration" id="create tag table" grafana | logger=migrator t=2025-06-19T11:46:56.152843735Z level=info msg="Migration successfully executed" id="create tag table" duration=1.180319ms grafana | logger=migrator t=2025-06-19T11:46:56.157199851Z level=info msg="Executing migration" id="add index tag.key_value" grafana | logger=migrator t=2025-06-19T11:46:56.15837936Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.178679ms grafana | logger=migrator t=2025-06-19T11:46:56.167490963Z level=info msg="Executing migration" id="create login attempt table" grafana | logger=migrator t=2025-06-19T11:46:56.168389934Z level=info msg="Migration successfully executed" id="create login attempt table" duration=896.621µs grafana | logger=migrator t=2025-06-19T11:46:56.178539543Z level=info msg="Executing migration" id="add index login_attempt.username" grafana | logger=migrator t=2025-06-19T11:46:56.179919506Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.379313ms grafana | logger=migrator t=2025-06-19T11:46:56.184809646Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" grafana | logger=migrator t=2025-06-19T11:46:56.185527613Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=716.967µs grafana | logger=migrator t=2025-06-19T11:46:56.192549775Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-19T11:46:56.203422221Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=10.872936ms grafana | logger=migrator t=2025-06-19T11:46:56.208000482Z level=info msg="Executing migration" id="create login_attempt v2" grafana | logger=migrator t=2025-06-19T11:46:56.208924585Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=920.543µs grafana | logger=migrator t=2025-06-19T11:46:56.213013595Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" grafana | logger=migrator t=2025-06-19T11:46:56.213914817Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=900.752µs grafana | logger=migrator t=2025-06-19T11:46:56.219310799Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" grafana | logger=migrator t=2025-06-19T11:46:56.219618946Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=308.108µs grafana | logger=migrator t=2025-06-19T11:46:56.22509827Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" grafana | logger=migrator t=2025-06-19T11:46:56.242389332Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=17.264372ms grafana | logger=migrator t=2025-06-19T11:46:56.248472332Z level=info msg="Executing migration" id="create user auth table" grafana | logger=migrator t=2025-06-19T11:46:56.249816474Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.343702ms grafana | logger=migrator 
t=2025-06-19T11:46:56.258310892Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" grafana | logger=migrator t=2025-06-19T11:46:56.259806659Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.500248ms grafana | logger=migrator t=2025-06-19T11:46:56.265386685Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" grafana | logger=migrator t=2025-06-19T11:46:56.265415166Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=29.751µs grafana | logger=migrator t=2025-06-19T11:46:56.273286538Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" grafana | logger=migrator t=2025-06-19T11:46:56.285269911Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=11.972872ms grafana | logger=migrator t=2025-06-19T11:46:56.303471055Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" grafana | logger=migrator t=2025-06-19T11:46:56.307898764Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=4.431269ms grafana | logger=migrator t=2025-06-19T11:46:56.312438504Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" grafana | logger=migrator t=2025-06-19T11:46:56.318040741Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.601337ms grafana | logger=migrator t=2025-06-19T11:46:56.323927856Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" grafana | logger=migrator t=2025-06-19T11:46:56.329206074Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.276308ms grafana | logger=migrator t=2025-06-19T11:46:56.333167661Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" grafana | logger=migrator t=2025-06-19T11:46:56.334154425Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=986.334µs grafana | logger=migrator t=2025-06-19T11:46:56.338218805Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" grafana | logger=migrator t=2025-06-19T11:46:56.343925274Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.705539ms grafana | logger=migrator t=2025-06-19T11:46:56.350664948Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" grafana | logger=migrator t=2025-06-19T11:46:56.359639008Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=8.976019ms grafana | logger=migrator t=2025-06-19T11:46:56.367525691Z level=info msg="Executing migration" id="create server_lock table" grafana | logger=migrator t=2025-06-19T11:46:56.368103844Z level=info msg="Migration successfully executed" id="create server_lock table" duration=577.774µs grafana | logger=migrator t=2025-06-19T11:46:56.374003688Z level=info msg="Executing migration" id="add index server_lock.operation_uid" grafana | logger=migrator t=2025-06-19T11:46:56.375730471Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.726103ms grafana | logger=migrator t=2025-06-19T11:46:56.381384539Z level=info msg="Executing migration" id="create user auth token table" grafana | logger=migrator t=2025-06-19T11:46:56.382315172Z level=info msg="Migration 
successfully executed" id="create user auth token table" duration=932.703µs grafana | logger=migrator t=2025-06-19T11:46:56.386046993Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" grafana | logger=migrator t=2025-06-19T11:46:56.387017487Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=970.434µs grafana | logger=migrator t=2025-06-19T11:46:56.393597637Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" grafana | logger=migrator t=2025-06-19T11:46:56.394522931Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=925.164µs grafana | logger=migrator t=2025-06-19T11:46:56.400057396Z level=info msg="Executing migration" id="add index user_auth_token.user_id" grafana | logger=migrator t=2025-06-19T11:46:56.401028449Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=970.663µs grafana | logger=migrator t=2025-06-19T11:46:56.405434837Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" grafana | logger=migrator t=2025-06-19T11:46:56.411312701Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.877524ms grafana | logger=migrator t=2025-06-19T11:46:56.418724141Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2025-06-19T11:46:56.419393368Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=670.957µs grafana | logger=migrator t=2025-06-19T11:46:56.429769011Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" grafana | logger=migrator t=2025-06-19T11:46:56.437535421Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=7.76421ms grafana | logger=migrator t=2025-06-19T11:46:56.445811794Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2025-06-19T11:46:56.446746566Z level=info msg="Migration successfully executed" id="create cache_data table" duration=938.293µs grafana | logger=migrator t=2025-06-19T11:46:56.452272561Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2025-06-19T11:46:56.453274026Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.001265ms grafana | logger=migrator t=2025-06-19T11:46:56.510365731Z level=info msg="Executing migration" id="create short_url table v1" grafana | logger=migrator t=2025-06-19T11:46:56.512008362Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.645171ms grafana | logger=migrator t=2025-06-19T11:46:56.521698028Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2025-06-19T11:46:56.523275147Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.577229ms grafana | logger=migrator t=2025-06-19T11:46:56.527702325Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator t=2025-06-19T11:46:56.527796737Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=88.583µs grafana | logger=migrator t=2025-06-19T11:46:56.534356938Z 
level=info msg="Executing migration" id="delete alert_definition table" grafana | logger=migrator t=2025-06-19T11:46:56.534429379Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=72.581µs grafana | logger=migrator t=2025-06-19T11:46:56.540875717Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2025-06-19T11:46:56.542442125Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.565618ms grafana | logger=migrator t=2025-06-19T11:46:56.546803731Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-19T11:46:56.548447482Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.643721ms grafana | logger=migrator t=2025-06-19T11:46:56.552421429Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-19T11:46:56.553727611Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.301742ms grafana | logger=migrator t=2025-06-19T11:46:56.561023999Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-19T11:46:56.56105015Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=26.14µs grafana | logger=migrator t=2025-06-19T11:46:56.56554224Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-19T11:46:56.566575705Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.032985ms grafana | logger=migrator t=2025-06-19T11:46:56.570423609Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-19T11:46:56.571350312Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=926.332µs grafana | logger=migrator t=2025-06-19T11:46:56.578143257Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-19T11:46:56.579168843Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.028166ms grafana | logger=migrator t=2025-06-19T11:46:56.583973009Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-19T11:46:56.585300452Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.334513ms grafana | logger=migrator t=2025-06-19T11:46:56.590963561Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2025-06-19T11:46:56.595512122Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.547661ms grafana | logger=migrator t=2025-06-19T11:46:56.601320974Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2025-06-19T11:46:56.602285347Z level=info msg="Migration successfully executed" 
id="drop alert_definition table" duration=958.993µs grafana | logger=migrator t=2025-06-19T11:46:56.607971196Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2025-06-19T11:46:56.608222582Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=254.366µs grafana | logger=migrator t=2025-06-19T11:46:56.612149248Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2025-06-19T11:46:56.61342495Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.275442ms grafana | logger=migrator t=2025-06-19T11:46:56.618797621Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2025-06-19T11:46:56.620004Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.206049ms grafana | logger=migrator t=2025-06-19T11:46:56.642379757Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator t=2025-06-19T11:46:56.644507039Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=2.122692ms grafana | logger=migrator t=2025-06-19T11:46:56.649248365Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-19T11:46:56.649281886Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=29.981µs grafana | logger=migrator t=2025-06-19T11:46:56.65599389Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2025-06-19T11:46:56.657107178Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.112547ms grafana | logger=migrator t=2025-06-19T11:46:56.664845666Z level=info msg="Executing migration" id="create alert_instance table" grafana | logger=migrator t=2025-06-19T11:46:56.666440665Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.594199ms grafana | logger=migrator t=2025-06-19T11:46:56.69039449Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2025-06-19T11:46:56.692176584Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.780834ms grafana | logger=migrator t=2025-06-19T11:46:56.698233212Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2025-06-19T11:46:56.699853602Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.62001ms grafana | logger=migrator t=2025-06-19T11:46:56.704341391Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2025-06-19T11:46:56.710672486Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" 
duration=6.330435ms grafana | logger=migrator t=2025-06-19T11:46:56.715185526Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-19T11:46:56.716260383Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.074217ms grafana | logger=migrator t=2025-06-19T11:46:56.721341936Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-19T11:46:56.722365662Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.023566ms grafana | logger=migrator t=2025-06-19T11:46:56.728679056Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2025-06-19T11:46:56.755336248Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=26.657552ms grafana | logger=migrator t=2025-06-19T11:46:56.767620698Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2025-06-19T11:46:56.796689668Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=29.06871ms grafana | logger=migrator t=2025-06-19T11:46:56.906289136Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-19T11:46:56.9085015Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=2.213634ms grafana | logger=migrator t=2025-06-19T11:46:57.000716134Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-19T11:46:57.002101148Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.387925ms grafana | logger=migrator t=2025-06-19T11:46:57.127229685Z level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | logger=migrator t=2025-06-19T11:46:57.13186274Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=4.635265ms grafana | logger=migrator t=2025-06-19T11:46:57.205450005Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2025-06-19T11:46:57.213028913Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=7.579078ms grafana | logger=migrator t=2025-06-19T11:46:57.219170004Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2025-06-19T11:46:57.226749841Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=7.579637ms grafana | logger=migrator t=2025-06-19T11:46:57.234111603Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2025-06-19T11:46:57.235537548Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.425375ms grafana | logger=migrator t=2025-06-19T11:46:57.241637738Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" grafana | 
logger=migrator t=2025-06-19T11:46:57.242642644Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.007306ms grafana | logger=migrator t=2025-06-19T11:46:57.248185491Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" grafana | logger=migrator t=2025-06-19T11:46:57.249261687Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.075677ms grafana | logger=migrator t=2025-06-19T11:46:57.253379368Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-19T11:46:57.253400718Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=22.31µs grafana | logger=migrator t=2025-06-19T11:46:57.258197897Z level=info msg="Executing migration" id="add column for to alert_rule" grafana | logger=migrator t=2025-06-19T11:46:57.266658816Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=8.465289ms grafana | logger=migrator t=2025-06-19T11:46:57.271244259Z level=info msg="Executing migration" id="add column annotations to alert_rule" grafana | logger=migrator t=2025-06-19T11:46:57.276693464Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.443535ms grafana | logger=migrator t=2025-06-19T11:46:57.285289616Z level=info msg="Executing migration" id="add column labels to alert_rule" grafana | logger=migrator t=2025-06-19T11:46:57.29235574Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=7.069514ms grafana | logger=migrator t=2025-06-19T11:46:57.296475392Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" grafana | logger=migrator t=2025-06-19T11:46:57.29724058Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=765.108µs grafana | logger=migrator t=2025-06-19T11:46:57.302544421Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" grafana | logger=migrator t=2025-06-19T11:46:57.304308085Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.762124ms grafana | logger=migrator t=2025-06-19T11:46:57.308609901Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" grafana | logger=migrator t=2025-06-19T11:46:57.316651559Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=8.042558ms grafana | logger=migrator t=2025-06-19T11:46:57.332819919Z level=info msg="Executing migration" id="add panel_id column to alert_rule" grafana | logger=migrator t=2025-06-19T11:46:57.341846321Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=9.028772ms grafana | logger=migrator t=2025-06-19T11:46:57.34588671Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" grafana | logger=migrator t=2025-06-19T11:46:57.34665013Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=763.46µs grafana | logger=migrator t=2025-06-19T11:46:57.35071931Z 
level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" grafana | logger=migrator t=2025-06-19T11:46:57.356844811Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.125151ms grafana | logger=migrator t=2025-06-19T11:46:57.363180538Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" grafana | logger=migrator t=2025-06-19T11:46:57.370513898Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=7.32618ms grafana | logger=migrator t=2025-06-19T11:46:57.374340293Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" grafana | logger=migrator t=2025-06-19T11:46:57.374369074Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=30.051µs grafana | logger=migrator t=2025-06-19T11:46:57.380020403Z level=info msg="Executing migration" id="create alert_rule_version table" grafana | logger=migrator t=2025-06-19T11:46:57.381136191Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.115618ms grafana | logger=migrator t=2025-06-19T11:46:57.385407186Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-19T11:46:57.386472382Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.064686ms grafana | logger=migrator t=2025-06-19T11:46:57.392509621Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" grafana | logger=migrator t=2025-06-19T11:46:57.393943517Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.432446ms grafana | logger=migrator t=2025-06-19T11:46:57.402708663Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-19T11:46:57.402770635Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=67.242µs grafana | logger=migrator t=2025-06-19T11:46:57.409160302Z level=info msg="Executing migration" id="add column for to alert_rule_version" grafana | logger=migrator t=2025-06-19T11:46:57.415533179Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.370697ms grafana | logger=migrator t=2025-06-19T11:46:57.420359598Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" grafana | logger=migrator t=2025-06-19T11:46:57.426186842Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=5.825994ms grafana | logger=migrator t=2025-06-19T11:46:57.431088483Z level=info msg="Executing migration" id="add column labels to alert_rule_version" grafana | logger=migrator t=2025-06-19T11:46:57.43784635Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.757077ms grafana | logger=migrator t=2025-06-19T11:46:57.455256939Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" grafana | logger=migrator t=2025-06-19T11:46:57.462212871Z level=info msg="Migration 
successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.961632ms grafana | logger=migrator t=2025-06-19T11:46:57.469542893Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" grafana | logger=migrator t=2025-06-19T11:46:57.476687098Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=7.139106ms grafana | logger=migrator t=2025-06-19T11:46:57.480307407Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" grafana | logger=migrator t=2025-06-19T11:46:57.480323278Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=16.421µs grafana | logger=migrator t=2025-06-19T11:46:57.484002319Z level=info msg="Executing migration" id=create_alert_configuration_table grafana | logger=migrator t=2025-06-19T11:46:57.484607144Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=605.085µs grafana | logger=migrator t=2025-06-19T11:46:57.490730105Z level=info msg="Executing migration" id="Add column default in alert_configuration" grafana | logger=migrator t=2025-06-19T11:46:57.498500027Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=7.754101ms grafana | logger=migrator t=2025-06-19T11:46:57.50713045Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" grafana | logger=migrator t=2025-06-19T11:46:57.50714992Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=19.76µs grafana | logger=migrator t=2025-06-19T11:46:57.513070636Z level=info msg="Executing migration" id="add column org_id in alert_configuration" grafana | logger=migrator t=2025-06-19T11:46:57.524744754Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=11.672938ms grafana | logger=migrator t=2025-06-19T11:46:57.528088306Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" grafana | logger=migrator t=2025-06-19T11:46:57.529224385Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.137749ms grafana | logger=migrator t=2025-06-19T11:46:57.535303575Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" grafana | logger=migrator t=2025-06-19T11:46:57.542103802Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.799517ms grafana | logger=migrator t=2025-06-19T11:46:57.546823229Z level=info msg="Executing migration" id=create_ngalert_configuration_table grafana | logger=migrator t=2025-06-19T11:46:57.54770623Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=877.171µs grafana | logger=migrator t=2025-06-19T11:46:57.551437013Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" grafana | logger=migrator t=2025-06-19T11:46:57.55256195Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.124417ms grafana | logger=migrator t=2025-06-19T11:46:57.559206195Z level=info msg="Executing migration" id="add column send_alerts_to 
in ngalert_configuration" grafana | logger=migrator t=2025-06-19T11:46:57.566174046Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.964681ms grafana | logger=migrator t=2025-06-19T11:46:57.577904066Z level=info msg="Executing migration" id="create provenance_type table" grafana | logger=migrator t=2025-06-19T11:46:57.579015234Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.111718ms grafana | logger=migrator t=2025-06-19T11:46:57.585296718Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" grafana | logger=migrator t=2025-06-19T11:46:57.586503438Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.20189ms grafana | logger=migrator t=2025-06-19T11:46:57.592654569Z level=info msg="Executing migration" id="create alert_image table" grafana | logger=migrator t=2025-06-19T11:46:57.594367762Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.712223ms grafana | logger=migrator t=2025-06-19T11:46:57.598651318Z level=info msg="Executing migration" id="add unique index on token to alert_image table" grafana | logger=migrator t=2025-06-19T11:46:57.600437842Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.781733ms grafana | logger=migrator t=2025-06-19T11:46:57.605454246Z level=info msg="Executing migration" id="support longer URLs in alert_image table" grafana | logger=migrator t=2025-06-19T11:46:57.605471596Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=34.511µs grafana | logger=migrator t=2025-06-19T11:46:57.610259574Z level=info msg="Executing migration" id=create_alert_configuration_history_table grafana | logger=migrator t=2025-06-19T11:46:57.61132833Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.067786ms grafana | logger=migrator t=2025-06-19T11:46:57.617288737Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" grafana | logger=migrator t=2025-06-19T11:46:57.618406856Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.117109ms grafana | logger=migrator t=2025-06-19T11:46:57.622342822Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-19T11:46:57.622825404Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-19T11:46:57.630174976Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" grafana | logger=migrator t=2025-06-19T11:46:57.630684948Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=509.522µs grafana | logger=migrator t=2025-06-19T11:46:57.635477487Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" grafana | logger=migrator t=2025-06-19T11:46:57.637671671Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=2.203445ms grafana | logger=migrator t=2025-06-19T11:46:57.64694528Z 
level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" grafana | logger=migrator t=2025-06-19T11:46:57.656606608Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=9.621467ms grafana | logger=migrator t=2025-06-19T11:46:57.663090608Z level=info msg="Executing migration" id="create library_element table v1" grafana | logger=migrator t=2025-06-19T11:46:57.66400102Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=909.972µs grafana | logger=migrator t=2025-06-19T11:46:57.668781629Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" grafana | logger=migrator t=2025-06-19T11:46:57.669987938Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.202119ms grafana | logger=migrator t=2025-06-19T11:46:57.676495568Z level=info msg="Executing migration" id="create library_element_connection table v1" grafana | logger=migrator t=2025-06-19T11:46:57.677877533Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.381125ms grafana | logger=migrator t=2025-06-19T11:46:57.682903447Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" grafana | logger=migrator t=2025-06-19T11:46:57.684465485Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.561128ms grafana | logger=migrator t=2025-06-19T11:46:57.695481307Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" grafana | logger=migrator t=2025-06-19T11:46:57.697061246Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.579179ms grafana | logger=migrator t=2025-06-19T11:46:57.70656699Z level=info msg="Executing migration" id="increase max description length to 2048" grafana | logger=migrator t=2025-06-19T11:46:57.706606001Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=39.491µs grafana | logger=migrator t=2025-06-19T11:46:57.717636494Z level=info msg="Executing migration" id="alter library_element model to mediumtext" grafana | logger=migrator t=2025-06-19T11:46:57.717674875Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=39.381µs grafana | logger=migrator t=2025-06-19T11:46:57.723628581Z level=info msg="Executing migration" id="add library_element folder uid" grafana | logger=migrator t=2025-06-19T11:46:57.733304741Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=9.67639ms grafana | logger=migrator t=2025-06-19T11:46:57.737485793Z level=info msg="Executing migration" id="populate library_element folder_uid" grafana | logger=migrator t=2025-06-19T11:46:57.737945155Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=459.161µs grafana | logger=migrator t=2025-06-19T11:46:57.743200144Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" grafana | logger=migrator t=2025-06-19T11:46:57.744462386Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.261892ms grafana | logger=migrator 
t=2025-06-19T11:46:57.749823938Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" grafana | logger=migrator t=2025-06-19T11:46:57.750120396Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=293.878µs grafana | logger=migrator t=2025-06-19T11:46:57.754153615Z level=info msg="Executing migration" id="create data_keys table" grafana | logger=migrator t=2025-06-19T11:46:57.755176501Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.022516ms grafana | logger=migrator t=2025-06-19T11:46:57.759364183Z level=info msg="Executing migration" id="create secrets table" grafana | logger=migrator t=2025-06-19T11:46:57.760704916Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.340123ms grafana | logger=migrator t=2025-06-19T11:46:57.767170966Z level=info msg="Executing migration" id="rename data_keys name column to id" grafana | logger=migrator t=2025-06-19T11:46:57.797818053Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=30.642836ms grafana | logger=migrator t=2025-06-19T11:46:57.802379644Z level=info msg="Executing migration" id="add name column into data_keys" grafana | logger=migrator t=2025-06-19T11:46:57.810099935Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=7.719931ms grafana | logger=migrator t=2025-06-19T11:46:57.82772091Z level=info msg="Executing migration" id="copy data_keys id column values into name" grafana | logger=migrator t=2025-06-19T11:46:57.827943965Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=223.425µs grafana | logger=migrator t=2025-06-19T11:46:57.833503152Z level=info msg="Executing migration" id="rename data_keys name column to label" grafana | logger=migrator t=2025-06-19T11:46:57.866555848Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=33.053326ms grafana | logger=migrator t=2025-06-19T11:46:57.871698285Z level=info msg="Executing migration" id="rename data_keys id column back to name" grafana | logger=migrator t=2025-06-19T11:46:57.904943016Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=33.23526ms grafana | logger=migrator t=2025-06-19T11:46:57.910139784Z level=info msg="Executing migration" id="create kv_store table v1" grafana | logger=migrator t=2025-06-19T11:46:57.911344093Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.203759ms grafana | logger=migrator t=2025-06-19T11:46:57.917012783Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" grafana | logger=migrator t=2025-06-19T11:46:57.918772237Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.758894ms grafana | logger=migrator t=2025-06-19T11:46:57.923173355Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" grafana | logger=migrator t=2025-06-19T11:46:57.923489614Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=316.859µs grafana | logger=migrator t=2025-06-19T11:46:57.929319007Z level=info msg="Executing migration" id="create permission table" grafana | logger=migrator t=2025-06-19T11:46:57.930270001Z level=info 
msg="Migration successfully executed" id="create permission table" duration=953.223µs grafana | logger=migrator t=2025-06-19T11:46:57.93592087Z level=info msg="Executing migration" id="add unique index permission.role_id" grafana | logger=migrator t=2025-06-19T11:46:57.937005466Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.084056ms grafana | logger=migrator t=2025-06-19T11:46:57.954784006Z level=info msg="Executing migration" id="add unique index role_id_action_scope" grafana | logger=migrator t=2025-06-19T11:46:57.956497087Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.712291ms grafana | logger=migrator t=2025-06-19T11:46:57.960970928Z level=info msg="Executing migration" id="create role table" grafana | logger=migrator t=2025-06-19T11:46:57.962606209Z level=info msg="Migration successfully executed" id="create role table" duration=1.640151ms grafana | logger=migrator t=2025-06-19T11:46:57.969762125Z level=info msg="Executing migration" id="add column display_name" grafana | logger=migrator t=2025-06-19T11:46:57.980149521Z level=info msg="Migration successfully executed" id="add column display_name" duration=10.384056ms grafana | logger=migrator t=2025-06-19T11:46:57.984708093Z level=info msg="Executing migration" id="add column group_name" grafana | logger=migrator t=2025-06-19T11:46:57.991329127Z level=info msg="Migration successfully executed" id="add column group_name" duration=6.619584ms grafana | logger=migrator t=2025-06-19T11:46:57.996893515Z level=info msg="Executing migration" id="add index role.org_id" grafana | logger=migrator t=2025-06-19T11:46:57.997993241Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.099046ms grafana | logger=migrator t=2025-06-19T11:46:58.00481055Z level=info msg="Executing migration" id="add unique index role_org_id_name" grafana | logger=migrator t=2025-06-19T11:46:58.006798409Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.987219ms grafana | logger=migrator t=2025-06-19T11:46:58.01123337Z level=info msg="Executing migration" id="add index role_org_id_uid" grafana | logger=migrator t=2025-06-19T11:46:58.012388008Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.153978ms grafana | logger=migrator t=2025-06-19T11:46:58.016365437Z level=info msg="Executing migration" id="create team role table" grafana | logger=migrator t=2025-06-19T11:46:58.017351501Z level=info msg="Migration successfully executed" id="create team role table" duration=985.794µs grafana | logger=migrator t=2025-06-19T11:46:58.026074297Z level=info msg="Executing migration" id="add index team_role.org_id" grafana | logger=migrator t=2025-06-19T11:46:58.028211901Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=2.139973ms grafana | logger=migrator t=2025-06-19T11:46:58.033354808Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" grafana | logger=migrator t=2025-06-19T11:46:58.034525917Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.20363ms grafana | logger=migrator t=2025-06-19T11:46:58.039208943Z level=info msg="Executing migration" id="add index team_role.team_id" grafana | logger=migrator t=2025-06-19T11:46:58.041052389Z level=info msg="Migration successfully executed" id="add index 
team_role.team_id" duration=1.842976ms grafana | logger=migrator t=2025-06-19T11:46:58.047387336Z level=info msg="Executing migration" id="create user role table" grafana | logger=migrator t=2025-06-19T11:46:58.048270988Z level=info msg="Migration successfully executed" id="create user role table" duration=883.072µs grafana | logger=migrator t=2025-06-19T11:46:58.051821896Z level=info msg="Executing migration" id="add index user_role.org_id" grafana | logger=migrator t=2025-06-19T11:46:58.053678322Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.855937ms grafana | logger=migrator t=2025-06-19T11:46:58.058257536Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" grafana | logger=migrator t=2025-06-19T11:46:58.059426164Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.168408ms grafana | logger=migrator t=2025-06-19T11:46:58.075101783Z level=info msg="Executing migration" id="add index user_role.user_id" grafana | logger=migrator t=2025-06-19T11:46:58.079387819Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=4.285356ms grafana | logger=migrator t=2025-06-19T11:46:58.095809776Z level=info msg="Executing migration" id="create builtin role table" grafana | logger=migrator t=2025-06-19T11:46:58.097359765Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.554759ms grafana | logger=migrator t=2025-06-19T11:46:58.102393649Z level=info msg="Executing migration" id="add index builtin_role.role_id" grafana | logger=migrator t=2025-06-19T11:46:58.103994569Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.59704ms grafana | logger=migrator t=2025-06-19T11:46:58.110850609Z level=info msg="Executing migration" id="add index builtin_role.name" grafana | logger=migrator t=2025-06-19T11:46:58.112285785Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.434156ms grafana | logger=migrator t=2025-06-19T11:46:58.115882064Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" grafana | logger=migrator t=2025-06-19T11:46:58.126757534Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=10.87558ms grafana | logger=migrator t=2025-06-19T11:46:58.13665241Z level=info msg="Executing migration" id="add index builtin_role.org_id" grafana | logger=migrator t=2025-06-19T11:46:58.138611057Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.943488ms grafana | logger=migrator t=2025-06-19T11:46:58.144334949Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" grafana | logger=migrator t=2025-06-19T11:46:58.145454938Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.119959ms grafana | logger=migrator t=2025-06-19T11:46:58.149518488Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" grafana | logger=migrator t=2025-06-19T11:46:58.150957414Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.439026ms grafana | logger=migrator t=2025-06-19T11:46:58.15683581Z level=info msg="Executing migration" id="add unique index role.uid" grafana | logger=migrator t=2025-06-19T11:46:58.158207993Z level=info 
msg="Migration successfully executed" id="add unique index role.uid" duration=1.371483ms grafana | logger=migrator t=2025-06-19T11:46:58.163888034Z level=info msg="Executing migration" id="create seed assignment table" grafana | logger=migrator t=2025-06-19T11:46:58.165601657Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.719223ms grafana | logger=migrator t=2025-06-19T11:46:58.171611686Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" grafana | logger=migrator t=2025-06-19T11:46:58.17295427Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.343134ms grafana | logger=migrator t=2025-06-19T11:46:58.17819736Z level=info msg="Executing migration" id="add column hidden to role table" grafana | logger=migrator t=2025-06-19T11:46:58.187132251Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.934101ms grafana | logger=migrator t=2025-06-19T11:46:58.191339095Z level=info msg="Executing migration" id="permission kind migration" grafana | logger=migrator t=2025-06-19T11:46:58.199830575Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.4896ms grafana | logger=migrator t=2025-06-19T11:46:58.213260319Z level=info msg="Executing migration" id="permission attribute migration" grafana | logger=migrator t=2025-06-19T11:46:58.224539458Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=11.279319ms grafana | logger=migrator t=2025-06-19T11:46:58.230221439Z level=info msg="Executing migration" id="permission identifier migration" grafana | logger=migrator t=2025-06-19T11:46:58.239936941Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=9.713841ms grafana | logger=migrator t=2025-06-19T11:46:58.245837157Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2025-06-19T11:46:58.247279112Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.446525ms grafana | logger=migrator t=2025-06-19T11:46:58.253858596Z level=info msg="Executing migration" id="add permission action scope role_id index" grafana | logger=migrator t=2025-06-19T11:46:58.255124067Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.267151ms grafana | logger=migrator t=2025-06-19T11:46:58.262851509Z level=info msg="Executing migration" id="remove permission role_id action scope index" grafana | logger=migrator t=2025-06-19T11:46:58.264229952Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.376314ms grafana | logger=migrator t=2025-06-19T11:46:58.270615951Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" grafana | logger=migrator t=2025-06-19T11:46:58.279553553Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=8.937583ms grafana | logger=migrator t=2025-06-19T11:46:58.284176837Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" grafana | logger=migrator t=2025-06-19T11:46:58.285133921Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=956.574µs grafana | logger=migrator 
t=2025-06-19T11:46:58.289428227Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" grafana | logger=migrator t=2025-06-19T11:46:58.290612767Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=1.1846ms grafana | logger=migrator t=2025-06-19T11:46:58.295743473Z level=info msg="Executing migration" id="create query_history table v1" grafana | logger=migrator t=2025-06-19T11:46:58.296716268Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=974.345µs grafana | logger=migrator t=2025-06-19T11:46:58.302215124Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" grafana | logger=migrator t=2025-06-19T11:46:58.303445205Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.22978ms grafana | logger=migrator t=2025-06-19T11:46:58.308412478Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" grafana | logger=migrator t=2025-06-19T11:46:58.308632393Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=220.715µs grafana | logger=migrator t=2025-06-19T11:46:58.31536211Z level=info msg="Executing migration" id="create query_history_details table v1" grafana | logger=migrator t=2025-06-19T11:46:58.316352615Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=990.065µs grafana | logger=migrator t=2025-06-19T11:46:58.320307923Z level=info msg="Executing migration" id="rbac disabled migrator" grafana | logger=migrator t=2025-06-19T11:46:58.320356104Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=48.641µs grafana | logger=migrator t=2025-06-19T11:46:58.35284129Z level=info msg="Executing migration" id="teams permissions migration" grafana | logger=migrator t=2025-06-19T11:46:58.353733421Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=902.202µs grafana | logger=migrator t=2025-06-19T11:46:58.360696525Z level=info msg="Executing migration" id="dashboard permissions" grafana | logger=migrator t=2025-06-19T11:46:58.36130951Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=613.015µs grafana | logger=migrator t=2025-06-19T11:46:58.365733999Z level=info msg="Executing migration" id="dashboard permissions uid scopes" grafana | logger=migrator t=2025-06-19T11:46:58.366416866Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=682.797µs grafana | logger=migrator t=2025-06-19T11:46:58.371157153Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2025-06-19T11:46:58.37142817Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=270.857µs grafana | logger=migrator t=2025-06-19T11:46:58.375520502Z level=info msg="Executing migration" id="alerting notification permissions" grafana | logger=migrator t=2025-06-19T11:46:58.376375653Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=855.411µs grafana | logger=migrator t=2025-06-19T11:46:58.382858994Z level=info msg="Executing migration" id="create query_history_star table v1" grafana | logger=migrator t=2025-06-19T11:46:58.384238288Z 
level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.378784ms grafana | logger=migrator t=2025-06-19T11:46:58.388079043Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2025-06-19T11:46:58.389181671Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.102098ms grafana | logger=migrator t=2025-06-19T11:46:58.392605715Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2025-06-19T11:46:58.400816159Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.209754ms grafana | logger=migrator t=2025-06-19T11:46:58.40651384Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2025-06-19T11:46:58.406535141Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=21.981µs grafana | logger=migrator t=2025-06-19T11:46:58.409636608Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2025-06-19T11:46:58.41056577Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=928.492µs grafana | logger=migrator t=2025-06-19T11:46:58.415816341Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2025-06-19T11:46:58.417513283Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.696573ms grafana | logger=migrator t=2025-06-19T11:46:58.429020008Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2025-06-19T11:46:58.430650149Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.634211ms grafana | logger=migrator t=2025-06-19T11:46:58.439115519Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2025-06-19T11:46:58.448296226Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.180617ms grafana | logger=migrator t=2025-06-19T11:46:58.453180938Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2025-06-19T11:46:58.454625733Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.446835ms grafana | logger=migrator t=2025-06-19T11:46:58.465880372Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2025-06-19T11:46:58.466817806Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=936.574µs grafana | logger=migrator t=2025-06-19T11:46:58.470587309Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-19T11:46:58.495689031Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=25.095852ms grafana | logger=migrator t=2025-06-19T11:46:58.509561145Z level=info msg="Executing migration" id="create correlation v2" grafana | logger=migrator t=2025-06-19T11:46:58.510772035Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.21031ms grafana | logger=migrator 
t=2025-06-19T11:46:58.515284607Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" grafana | logger=migrator t=2025-06-19T11:46:58.516423655Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.138958ms grafana | logger=migrator t=2025-06-19T11:46:58.520857785Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2025-06-19T11:46:58.521924762Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.066526ms grafana | logger=migrator t=2025-06-19T11:46:58.528754741Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2025-06-19T11:46:58.529853438Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.098427ms grafana | logger=migrator t=2025-06-19T11:46:58.537636152Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2025-06-19T11:46:58.537892658Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=256.777µs grafana | logger=migrator t=2025-06-19T11:46:58.542922633Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" grafana | logger=migrator t=2025-06-19T11:46:58.543755283Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=873.471µs grafana | logger=migrator t=2025-06-19T11:46:58.549719811Z level=info msg="Executing migration" id="add provisioning column" grafana | logger=migrator t=2025-06-19T11:46:58.556580301Z level=info msg="Migration successfully executed" id="add provisioning column" duration=6.86047ms grafana | logger=migrator t=2025-06-19T11:46:58.562511548Z level=info msg="Executing migration" id="add type column" grafana | logger=migrator t=2025-06-19T11:46:58.570969448Z level=info msg="Migration successfully executed" id="add type column" duration=8.45747ms grafana | logger=migrator t=2025-06-19T11:46:58.593504567Z level=info msg="Executing migration" id="create entity_events table" grafana | logger=migrator t=2025-06-19T11:46:58.594904861Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.400844ms grafana | logger=migrator t=2025-06-19T11:46:58.601790932Z level=info msg="Executing migration" id="create dashboard public config v1" grafana | logger=migrator t=2025-06-19T11:46:58.602796197Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.005015ms grafana | logger=migrator t=2025-06-19T11:46:58.610811136Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-19T11:46:58.611305478Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-19T11:46:58.617123672Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-19T11:46:58.617953572Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-19T11:46:58.623326246Z level=info msg="Executing migration" id="Drop old dashboard public config table" grafana | logger=migrator 
t=2025-06-19T11:46:58.624314221Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=987.205µs grafana | logger=migrator t=2025-06-19T11:46:58.631007646Z level=info msg="Executing migration" id="recreate dashboard public config v1" grafana | logger=migrator t=2025-06-19T11:46:58.632314619Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.303313ms grafana | logger=migrator t=2025-06-19T11:46:58.637947979Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-19T11:46:58.639659211Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.693051ms grafana | logger=migrator t=2025-06-19T11:46:58.647629088Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-19T11:46:58.648568621Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=939.663µs grafana | logger=migrator t=2025-06-19T11:46:58.654770716Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-19T11:46:58.656415466Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.684401ms grafana | logger=migrator t=2025-06-19T11:46:58.663458851Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-19T11:46:58.664535258Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.075687ms grafana | logger=migrator t=2025-06-19T11:46:58.670425724Z level=info msg="Executing migration" id="Drop public config table" grafana | logger=migrator t=2025-06-19T11:46:58.671304416Z level=info msg="Migration successfully executed" id="Drop public config table" duration=878.063µs grafana | logger=migrator t=2025-06-19T11:46:58.674932586Z level=info msg="Executing migration" id="Recreate dashboard public config v2" grafana | logger=migrator t=2025-06-19T11:46:58.676200287Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.267191ms grafana | logger=migrator t=2025-06-19T11:46:58.6835804Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-19T11:46:58.685169319Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.592419ms grafana | logger=migrator t=2025-06-19T11:46:58.694128142Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-19T11:46:58.695102206Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=971.383µs grafana | logger=migrator t=2025-06-19T11:46:58.698948531Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" grafana | logger=migrator t=2025-06-19T11:46:58.700283074Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.334074ms grafana | logger=migrator t=2025-06-19T11:46:58.721234403Z 
level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" grafana | logger=migrator t=2025-06-19T11:46:58.745090745Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=23.857112ms grafana | logger=migrator t=2025-06-19T11:46:58.753870843Z level=info msg="Executing migration" id="add annotations_enabled column" grafana | logger=migrator t=2025-06-19T11:46:58.763724037Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=9.853784ms grafana | logger=migrator t=2025-06-19T11:46:58.767628294Z level=info msg="Executing migration" id="add time_selection_enabled column" grafana | logger=migrator t=2025-06-19T11:46:58.776514624Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.88522ms grafana | logger=migrator t=2025-06-19T11:46:58.781958299Z level=info msg="Executing migration" id="delete orphaned public dashboards" grafana | logger=migrator t=2025-06-19T11:46:58.782281837Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=337.098µs grafana | logger=migrator t=2025-06-19T11:46:58.786149964Z level=info msg="Executing migration" id="add share column" grafana | logger=migrator t=2025-06-19T11:46:58.795016813Z level=info msg="Migration successfully executed" id="add share column" duration=8.86592ms grafana | logger=migrator t=2025-06-19T11:46:58.798959701Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" grafana | logger=migrator t=2025-06-19T11:46:58.799198716Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=238.145µs grafana | logger=migrator t=2025-06-19T11:46:58.803154675Z level=info msg="Executing migration" id="create file table" grafana | logger=migrator t=2025-06-19T11:46:58.803939174Z level=info msg="Migration successfully executed" id="create file table" duration=783.909µs grafana | logger=migrator t=2025-06-19T11:46:58.809994704Z level=info msg="Executing migration" id="file table idx: path natural pk" grafana | logger=migrator t=2025-06-19T11:46:58.811603425Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.60849ms grafana | logger=migrator t=2025-06-19T11:46:58.815958112Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2025-06-19T11:46:58.817175163Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.216121ms grafana | logger=migrator t=2025-06-19T11:46:58.822033332Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2025-06-19T11:46:58.823689114Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.655062ms grafana | logger=migrator t=2025-06-19T11:46:58.846192262Z level=info msg="Executing migration" id="file table idx: path key" grafana | logger=migrator t=2025-06-19T11:46:58.847696449Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.504156ms grafana | logger=migrator t=2025-06-19T11:46:58.853077672Z level=info msg="Executing migration" id="set path collation in file table" grafana | logger=migrator t=2025-06-19T11:46:58.853108393Z level=info msg="Migration successfully executed" id="set 
path collation in file table" duration=31.371µs grafana | logger=migrator t=2025-06-19T11:46:58.857443911Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" grafana | logger=migrator t=2025-06-19T11:46:58.857467751Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=24.29µs grafana | logger=migrator t=2025-06-19T11:46:58.861402088Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator t=2025-06-19T11:46:58.862265361Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=863.352µs grafana | logger=migrator t=2025-06-19T11:46:58.866744961Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2025-06-19T11:46:58.866951346Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=205.875µs grafana | logger=migrator t=2025-06-19T11:46:58.8723435Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2025-06-19T11:46:58.874667698Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.324118ms grafana | logger=migrator t=2025-06-19T11:46:58.879021036Z level=info msg="Executing migration" id="Add UID column to playlist" grafana | logger=migrator t=2025-06-19T11:46:58.888556352Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.534896ms grafana | logger=migrator t=2025-06-19T11:46:58.892341246Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2025-06-19T11:46:58.892460368Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=118.633µs grafana | logger=migrator t=2025-06-19T11:46:58.900457797Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2025-06-19T11:46:58.902367075Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.908987ms grafana | logger=migrator t=2025-06-19T11:46:58.906754724Z level=info msg="Executing migration" id="update group index for alert rules" grafana | logger=migrator t=2025-06-19T11:46:58.907137643Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=383.389µs grafana | logger=migrator t=2025-06-19T11:46:58.911003699Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" grafana | logger=migrator t=2025-06-19T11:46:58.911223424Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=219.555µs grafana | logger=migrator t=2025-06-19T11:46:58.915414868Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator t=2025-06-19T11:46:58.916198068Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=782.739µs grafana | logger=migrator t=2025-06-19T11:46:58.922070373Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2025-06-19T11:46:58.933599899Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=11.530366ms grafana | logger=migrator t=2025-06-19T11:46:58.937514726Z level=info msg="Executing migration" 
id="add scope column to seed_assignment" grafana | logger=migrator t=2025-06-19T11:46:58.947063113Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.547297ms grafana | logger=migrator t=2025-06-19T11:46:58.951228116Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" grafana | logger=migrator t=2025-06-19T11:46:58.952385205Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.157449ms grafana | logger=migrator t=2025-06-19T11:46:58.970515164Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2025-06-19T11:46:59.047599151Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=77.076077ms grafana | logger=migrator t=2025-06-19T11:46:59.054597575Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2025-06-19T11:46:59.055735914Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.138889ms grafana | logger=migrator t=2025-06-19T11:46:59.060273146Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2025-06-19T11:46:59.06242953Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=2.155904ms grafana | logger=migrator t=2025-06-19T11:46:59.067843585Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2025-06-19T11:46:59.097250868Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=29.403583ms grafana | logger=migrator t=2025-06-19T11:46:59.106042357Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2025-06-19T11:46:59.115109103Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=9.031985ms grafana | logger=migrator t=2025-06-19T11:46:59.120867776Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2025-06-19T11:46:59.121730598Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=869.182µs grafana | logger=migrator t=2025-06-19T11:46:59.129114912Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2025-06-19T11:46:59.129406569Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=292.067µs grafana | logger=migrator t=2025-06-19T11:46:59.135725076Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | logger=migrator t=2025-06-19T11:46:59.136110946Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=391.48µs grafana | logger=migrator t=2025-06-19T11:46:59.140553247Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2025-06-19T11:46:59.140955847Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=399.8µs grafana | logger=migrator t=2025-06-19T11:46:59.146557006Z level=info 
msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator t=2025-06-19T11:46:59.146908795Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=351.559µs grafana | logger=migrator t=2025-06-19T11:46:59.151362246Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2025-06-19T11:46:59.152865403Z level=info msg="Migration successfully executed" id="create folder table" duration=1.503577ms grafana | logger=migrator t=2025-06-19T11:46:59.156915685Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2025-06-19T11:46:59.158041322Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.124737ms grafana | logger=migrator t=2025-06-19T11:46:59.164518394Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2025-06-19T11:46:59.165633502Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.117868ms grafana | logger=migrator t=2025-06-19T11:46:59.174556674Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2025-06-19T11:46:59.174910403Z level=info msg="Migration successfully executed" id="Update folder title length" duration=359.798µs grafana | logger=migrator t=2025-06-19T11:46:59.179931538Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-19T11:46:59.182218415Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=2.288947ms grafana | logger=migrator t=2025-06-19T11:46:59.189089966Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-19T11:46:59.190729877Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.643671ms grafana | logger=migrator t=2025-06-19T11:46:59.196996303Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2025-06-19T11:46:59.198677765Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.683522ms grafana | logger=migrator t=2025-06-19T11:46:59.204381147Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2025-06-19T11:46:59.205242138Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=865.031µs grafana | logger=migrator t=2025-06-19T11:46:59.227327009Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2025-06-19T11:46:59.227964504Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=632.285µs grafana | logger=migrator t=2025-06-19T11:46:59.234888227Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2025-06-19T11:46:59.23819842Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=3.312073ms grafana | logger=migrator t=2025-06-19T11:46:59.243139083Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" grafana | 
logger=migrator t=2025-06-19T11:46:59.244580658Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.441775ms grafana | logger=migrator t=2025-06-19T11:46:59.248998979Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" grafana | logger=migrator t=2025-06-19T11:46:59.250865955Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.865696ms grafana | logger=migrator t=2025-06-19T11:46:59.258641129Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-19T11:46:59.26070167Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=2.070982ms grafana | logger=migrator t=2025-06-19T11:46:59.26547761Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" grafana | logger=migrator t=2025-06-19T11:46:59.267172481Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.697432ms grafana | logger=migrator t=2025-06-19T11:46:59.27273083Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-19T11:46:59.27394552Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.21551ms grafana | logger=migrator t=2025-06-19T11:46:59.278747209Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2025-06-19T11:46:59.280284248Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.539609ms grafana | logger=migrator t=2025-06-19T11:46:59.286873842Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2025-06-19T11:46:59.289437816Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.579334ms grafana | logger=migrator t=2025-06-19T11:46:59.294991574Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2025-06-19T11:46:59.296266467Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.276073ms grafana | logger=migrator t=2025-06-19T11:46:59.300911392Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2025-06-19T11:46:59.301953538Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.038326ms grafana | logger=migrator t=2025-06-19T11:46:59.307702631Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2025-06-19T11:46:59.308991593Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.289482ms grafana | logger=migrator t=2025-06-19T11:46:59.317083305Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2025-06-19T11:46:59.318602163Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.522148ms grafana | logger=migrator t=2025-06-19T11:46:59.323163406Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator t=2025-06-19T11:46:59.323623747Z level=info 
msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=462.411µs grafana | logger=migrator t=2025-06-19T11:46:59.327940625Z level=info msg="Executing migration" id="Add folder_uid for dashboard" grafana | logger=migrator t=2025-06-19T11:46:59.337265237Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.314742ms grafana | logger=migrator t=2025-06-19T11:46:59.354317932Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2025-06-19T11:46:59.355596995Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.283193ms grafana | logger=migrator t=2025-06-19T11:46:59.360996659Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-19T11:46:59.36102262Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=26.411µs grafana | logger=migrator t=2025-06-19T11:46:59.365208364Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-19T11:46:59.366522247Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.313183ms grafana | logger=migrator t=2025-06-19T11:46:59.371008478Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-19T11:46:59.371028018Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=20.54µs grafana | logger=migrator t=2025-06-19T11:46:59.376040714Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-19T11:46:59.378224978Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.184985ms grafana | logger=migrator t=2025-06-19T11:46:59.385601772Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-19T11:46:59.386849383Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.246741ms grafana | logger=migrator t=2025-06-19T11:46:59.395905238Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-19T11:46:59.397424047Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.518189ms grafana | logger=migrator t=2025-06-19T11:46:59.40317797Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2025-06-19T11:46:59.40518926Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=2.01031ms grafana | logger=migrator t=2025-06-19T11:46:59.409835155Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2025-06-19T11:46:59.410940434Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.107889ms grafana | logger=migrator t=2025-06-19T11:46:59.415973528Z level=info msg="Executing migration" id="add back entry for 
orgid=0 migrated status" grafana | logger=migrator t=2025-06-19T11:46:59.416350767Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=376.889µs grafana | logger=migrator t=2025-06-19T11:46:59.422006729Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" grafana | logger=migrator t=2025-06-19T11:46:59.423336962Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=1.329243ms grafana | logger=migrator t=2025-06-19T11:46:59.429235509Z level=info msg="Executing migration" id="create cloud_migration table v1" grafana | logger=migrator t=2025-06-19T11:46:59.430649675Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.417816ms grafana | logger=migrator t=2025-06-19T11:46:59.436531121Z level=info msg="Executing migration" id="create cloud_migration_run table v1" grafana | logger=migrator t=2025-06-19T11:46:59.437490545Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=959.744µs grafana | logger=migrator t=2025-06-19T11:46:59.442888899Z level=info msg="Executing migration" id="add stack_id column" grafana | logger=migrator t=2025-06-19T11:46:59.452457077Z level=info msg="Migration successfully executed" id="add stack_id column" duration=9.567228ms grafana | logger=migrator t=2025-06-19T11:46:59.458791906Z level=info msg="Executing migration" id="add region_slug column" grafana | logger=migrator t=2025-06-19T11:46:59.467851231Z level=info msg="Migration successfully executed" id="add region_slug column" duration=9.058795ms grafana | logger=migrator t=2025-06-19T11:46:59.47984324Z level=info msg="Executing migration" id="add cluster_slug column" grafana | logger=migrator t=2025-06-19T11:46:59.492850454Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=12.999784ms grafana | logger=migrator t=2025-06-19T11:46:59.497722706Z level=info msg="Executing migration" id="add migration uid column" grafana | logger=migrator t=2025-06-19T11:46:59.505180151Z level=info msg="Migration successfully executed" id="add migration uid column" duration=7.454855ms grafana | logger=migrator t=2025-06-19T11:46:59.510654947Z level=info msg="Executing migration" id="Update uid column values for migration" grafana | logger=migrator t=2025-06-19T11:46:59.511029516Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=377.989µs grafana | logger=migrator t=2025-06-19T11:46:59.517009925Z level=info msg="Executing migration" id="Add unique index migration_uid" grafana | logger=migrator t=2025-06-19T11:46:59.518408571Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.401266ms grafana | logger=migrator t=2025-06-19T11:46:59.522587944Z level=info msg="Executing migration" id="add migration run uid column" grafana | logger=migrator t=2025-06-19T11:46:59.533328582Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=10.743038ms grafana | logger=migrator t=2025-06-19T11:46:59.538081611Z level=info msg="Executing migration" id="Update uid column values for migration run" grafana | logger=migrator t=2025-06-19T11:46:59.538377818Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=296.397µs grafana | logger=migrator t=2025-06-19T11:46:59.544184023Z 
level=info msg="Executing migration" id="Add unique index migration_run_uid" grafana | logger=migrator t=2025-06-19T11:46:59.545418404Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=1.233741ms grafana | logger=migrator t=2025-06-19T11:46:59.550667464Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-19T11:46:59.575626056Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=24.958502ms grafana | logger=migrator t=2025-06-19T11:46:59.580763844Z level=info msg="Executing migration" id="create cloud_migration_session v2" grafana | logger=migrator t=2025-06-19T11:46:59.581503482Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=739.228µs grafana | logger=migrator t=2025-06-19T11:46:59.586297902Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" grafana | logger=migrator t=2025-06-19T11:46:59.587173043Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=874.661µs grafana | logger=migrator t=2025-06-19T11:46:59.607245894Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" grafana | logger=migrator t=2025-06-19T11:46:59.607769917Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=523.733µs grafana | logger=migrator t=2025-06-19T11:46:59.612836653Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" grafana | logger=migrator t=2025-06-19T11:46:59.61432679Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=1.495527ms grafana | logger=migrator t=2025-06-19T11:46:59.618953376Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-19T11:46:59.647303652Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=28.350886ms grafana | logger=migrator t=2025-06-19T11:46:59.652671386Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" grafana | logger=migrator t=2025-06-19T11:46:59.653507026Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=835.31µs grafana | logger=migrator t=2025-06-19T11:46:59.658029929Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" grafana | logger=migrator t=2025-06-19T11:46:59.659371272Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=1.339933ms grafana | logger=migrator t=2025-06-19T11:46:59.666462309Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" grafana | logger=migrator t=2025-06-19T11:46:59.66689079Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=428.541µs grafana | logger=migrator t=2025-06-19T11:46:59.672202542Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" grafana | logger=migrator t=2025-06-19T11:46:59.673514595Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=1.311312ms grafana | 
logger=migrator t=2025-06-19T11:46:59.677625317Z level=info msg="Executing migration" id="add snapshot upload_url column" grafana | logger=migrator t=2025-06-19T11:46:59.689534564Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=11.908867ms grafana | logger=migrator t=2025-06-19T11:46:59.697738158Z level=info msg="Executing migration" id="add snapshot status column" grafana | logger=migrator t=2025-06-19T11:46:59.708187089Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=10.448811ms grafana | logger=migrator t=2025-06-19T11:46:59.712545107Z level=info msg="Executing migration" id="add snapshot local_directory column" grafana | logger=migrator t=2025-06-19T11:46:59.723116621Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=10.570724ms grafana | logger=migrator t=2025-06-19T11:46:59.73954081Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" grafana | logger=migrator t=2025-06-19T11:46:59.753499228Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=13.959368ms grafana | logger=migrator t=2025-06-19T11:46:59.759358664Z level=info msg="Executing migration" id="add snapshot encryption_key column" grafana | logger=migrator t=2025-06-19T11:46:59.771116656Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=11.757463ms grafana | logger=migrator t=2025-06-19T11:46:59.776455699Z level=info msg="Executing migration" id="add snapshot error_string column" grafana | logger=migrator t=2025-06-19T11:46:59.784370947Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=7.913768ms grafana | logger=migrator t=2025-06-19T11:46:59.790152391Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" grafana | logger=migrator t=2025-06-19T11:46:59.792435248Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=2.285757ms grafana | logger=migrator t=2025-06-19T11:46:59.798614122Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" grafana | logger=migrator t=2025-06-19T11:46:59.835181262Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=36.563241ms grafana | logger=migrator t=2025-06-19T11:46:59.840445054Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" grafana | logger=migrator t=2025-06-19T11:46:59.848181287Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=7.734203ms grafana | logger=migrator t=2025-06-19T11:46:59.872241076Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" grafana | logger=migrator t=2025-06-19T11:46:59.883905346Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=11.664711ms grafana | logger=migrator t=2025-06-19T11:46:59.887955477Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" grafana | logger=migrator t=2025-06-19T11:46:59.895787583Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=7.830556ms grafana | logger=migrator t=2025-06-19T11:46:59.902698424Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" 
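A second recurring shape in this log is add column, backfill, then unique index (see "add migration uid column", "Update uid column values for migration", and "Add unique index migration_uid" earlier): the column is added nullable so existing rows stay valid, values are backfilled, and only then is uniqueness enforced. A simplified sketch under those assumptions, not Grafana's actual code:

```python
# Sketch of the add-column -> backfill -> unique-index migration sequence.
# Table and column names are simplified stand-ins.
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cloud_migration (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO cloud_migration DEFAULT VALUES")

# 1. Add the new column as nullable so pre-existing rows remain valid.
conn.execute("ALTER TABLE cloud_migration ADD COLUMN uid TEXT")
# 2. Backfill a value for every row that predates the column.
rows = conn.execute("SELECT id FROM cloud_migration WHERE uid IS NULL").fetchall()
for (row_id,) in rows:
    conn.execute("UPDATE cloud_migration SET uid = ? WHERE id = ?",
                 (uuid.uuid4().hex[:9], row_id))
# 3. Only after the backfill can uniqueness be enforced safely.
conn.execute("CREATE UNIQUE INDEX UQE_cloud_migration_uid ON cloud_migration (uid)")
conn.commit()
```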
grafana | logger=migrator t=2025-06-19T11:46:59.912369266Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=9.669792ms grafana | logger=migrator t=2025-06-19T11:46:59.916937709Z level=info msg="Executing migration" id="increase resource_uid column length" grafana | logger=migrator t=2025-06-19T11:46:59.917046682Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=109.823µs grafana | logger=migrator t=2025-06-19T11:46:59.921672627Z level=info msg="Executing migration" id="alter kv_store.value to longtext" grafana | logger=migrator t=2025-06-19T11:46:59.921754999Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=78.822µs grafana | logger=migrator t=2025-06-19T11:46:59.92980146Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" grafana | logger=migrator t=2025-06-19T11:46:59.940804904Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=11.004634ms grafana | logger=migrator t=2025-06-19T11:46:59.944413794Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" grafana | logger=migrator t=2025-06-19T11:46:59.953706745Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.292251ms grafana | logger=migrator t=2025-06-19T11:46:59.957241614Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" grafana | logger=migrator t=2025-06-19T11:46:59.957579572Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=337.238µs grafana | logger=migrator t=2025-06-19T11:46:59.963673894Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" grafana | logger=migrator t=2025-06-19T11:46:59.964184926Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=510.532µs grafana | logger=migrator t=2025-06-19T11:46:59.968495804Z level=info msg="Executing migration" id="add record column to alert_rule table" grafana | logger=migrator t=2025-06-19T11:46:59.98037262Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=11.872456ms grafana | logger=migrator t=2025-06-19T11:46:59.984791559Z level=info msg="Executing migration" id="add record column to alert_rule_version table" grafana | logger=migrator t=2025-06-19T11:46:59.993282051Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=8.489382ms grafana | logger=migrator t=2025-06-19T11:47:00.002130252Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" grafana | logger=migrator t=2025-06-19T11:47:00.012071606Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=9.940224ms grafana | logger=migrator t=2025-06-19T11:47:00.017618741Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" grafana | logger=migrator t=2025-06-19T11:47:00.025916264Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=8.296533ms grafana | logger=migrator t=2025-06-19T11:47:00.03310953Z level=info 
msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" grafana | logger=migrator t=2025-06-19T11:47:00.033773376Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=660.766µs grafana | logger=migrator t=2025-06-19T11:47:00.037718632Z level=info msg="Executing migration" id="add metadata column to alert_rule table" grafana | logger=migrator t=2025-06-19T11:47:00.047593594Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=9.874172ms grafana | logger=migrator t=2025-06-19T11:47:00.054886832Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" grafana | logger=migrator t=2025-06-19T11:47:00.065636445Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=10.748943ms grafana | logger=migrator t=2025-06-19T11:47:00.06950863Z level=info msg="Executing migration" id="delete orphaned service account permissions" grafana | logger=migrator t=2025-06-19T11:47:00.069876969Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=367.589µs grafana | logger=migrator t=2025-06-19T11:47:00.07403094Z level=info msg="Executing migration" id="adding action set permissions" grafana | logger=migrator t=2025-06-19T11:47:00.074680505Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=648.315µs grafana | logger=migrator t=2025-06-19T11:47:00.081071932Z level=info msg="Executing migration" id="create user_external_session table" grafana | logger=migrator t=2025-06-19T11:47:00.082397454Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.325382ms grafana | logger=migrator t=2025-06-19T11:47:00.09203031Z level=info msg="Executing migration" id="increase name_id column length to 1024" grafana | logger=migrator t=2025-06-19T11:47:00.092135093Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=105.713µs grafana | logger=migrator t=2025-06-19T11:47:00.103341167Z level=info msg="Executing migration" id="increase session_id column length to 1024" grafana | logger=migrator t=2025-06-19T11:47:00.103407148Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=66.552µs grafana | logger=migrator t=2025-06-19T11:47:00.107438736Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" grafana | logger=migrator t=2025-06-19T11:47:00.108041741Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=602.535µs grafana | logger=migrator t=2025-06-19T11:47:00.131228508Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" grafana | logger=migrator t=2025-06-19T11:47:00.138969716Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=7.740868ms grafana | logger=migrator t=2025-06-19T11:47:00.142689908Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" grafana | logger=migrator t=2025-06-19T11:47:00.149805401Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=7.114563ms grafana | 
logger=migrator t=2025-06-19T11:47:00.155806718Z level=info msg="Executing migration" id="add alert_rule_state table" grafana | logger=migrator t=2025-06-19T11:47:00.156938666Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=1.131318ms grafana | logger=migrator t=2025-06-19T11:47:00.165334031Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" grafana | logger=migrator t=2025-06-19T11:47:00.166573781Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=1.23939ms grafana | logger=migrator t=2025-06-19T11:47:00.172296421Z level=info msg="Executing migration" id="add guid column to alert_rule table" grafana | logger=migrator t=2025-06-19T11:47:00.184596872Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=12.300461ms grafana | logger=migrator t=2025-06-19T11:47:00.188159539Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" grafana | logger=migrator t=2025-06-19T11:47:00.195646242Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=7.484913ms grafana | logger=migrator t=2025-06-19T11:47:00.202060689Z level=info msg="Executing migration" id="cleanup alert_rule_version table" grafana | logger=migrator t=2025-06-19T11:47:00.20213027Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 grafana | logger=migrator t=2025-06-19T11:47:00.202432808Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 grafana | logger=migrator t=2025-06-19T11:47:00.202491339Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=429.42µs grafana | logger=migrator t=2025-06-19T11:47:00.206484297Z level=info msg="Executing migration" id="populate rule guid in alert rule table" grafana | logger=migrator t=2025-06-19T11:47:00.207117302Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=632.125µs grafana | logger=migrator t=2025-06-19T11:47:00.214500772Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-19T11:47:00.216342778Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.841576ms grafana | logger=migrator t=2025-06-19T11:47:00.221102964Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" grafana | logger=migrator t=2025-06-19T11:47:00.222417486Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.313862ms grafana | logger=migrator t=2025-06-19T11:47:00.229797137Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" grafana | logger=migrator t=2025-06-19T11:47:00.231717284Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.922558ms grafana | logger=migrator t=2025-06-19T11:47:00.235887555Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" grafana | logger=migrator 
t=2025-06-19T11:47:00.237901685Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=2.01309ms grafana | logger=migrator t=2025-06-19T11:47:00.249768504Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" grafana | logger=migrator t=2025-06-19T11:47:00.261572543Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=11.801609ms grafana | logger=migrator t=2025-06-19T11:47:00.266430692Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" grafana | logger=migrator t=2025-06-19T11:47:00.281441028Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=15.014006ms grafana | logger=migrator t=2025-06-19T11:47:00.288917301Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" grafana | logger=migrator t=2025-06-19T11:47:00.299409948Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=10.487917ms grafana | logger=migrator t=2025-06-19T11:47:00.303842836Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" grafana | logger=migrator t=2025-06-19T11:47:00.314760022Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=10.916616ms grafana | logger=migrator t=2025-06-19T11:47:00.31915637Z level=info msg="Executing migration" id="remove the datasources:drilldown action" grafana | logger=migrator t=2025-06-19T11:47:00.319515379Z level=info msg="Removed 0 datasources:drilldown permissions" grafana | logger=migrator t=2025-06-19T11:47:00.31953905Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=382.519µs grafana | logger=migrator t=2025-06-19T11:47:00.324932101Z level=info msg="Executing migration" id="remove title in folder unique index" grafana | logger=migrator t=2025-06-19T11:47:00.326275634Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.343213ms grafana | logger=migrator t=2025-06-19T11:47:00.331287086Z level=info msg="migrations completed" performed=654 skipped=0 duration=7.092426697s grafana | logger=migrator t=2025-06-19T11:47:00.332460366Z level=info msg="Unlocking database" grafana | logger=sqlstore t=2025-06-19T11:47:00.349395909Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2025-06-19T11:47:00.349771999Z level=info msg="Created default organization" grafana | logger=secrets t=2025-06-19T11:47:00.36456991Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-19T11:47:00.490931058Z level=info msg="Restored cache from database" duration=632.686µs grafana | logger=resource-migrator t=2025-06-19T11:47:00.503227548Z level=info msg="Locking database" grafana | logger=resource-migrator t=2025-06-19T11:47:00.503242468Z level=info msg="Starting DB migrations" grafana | logger=resource-migrator t=2025-06-19T11:47:00.514978576Z level=info msg="Executing migration" id="create resource_migration_log table" grafana | logger=resource-migrator t=2025-06-19T11:47:00.515912468Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=933.592µs grafana | 
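Both migrators bracket their work with "Locking database" / "Unlocking database" and record each executed migration id in a log table (here resource_migration_log), which is how a restart can report performed=N skipped=M instead of re-running everything. A minimal sketch of that bookkeeping, assuming a simple id-per-migration log table rather than Grafana's actual implementation:

```python
# Sketch of an idempotent migration runner: each migration id is written to a
# log table, so already-applied migrations are skipped on the next startup.
import sqlite3

MIGRATIONS = {
    "create resource table": "CREATE TABLE resource (guid TEXT PRIMARY KEY)",
    "create resource_history table": "CREATE TABLE resource_history (guid TEXT, version INTEGER)",
}

def run_migrations(conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS migration_log (migration_id TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT migration_id FROM migration_log")}
    for mig_id, sql in MIGRATIONS.items():
        if mig_id in applied:
            continue  # counts toward 'skipped' in the summary line
        conn.execute(sql)
        conn.execute("INSERT INTO migration_log (migration_id) VALUES (?)", (mig_id,))
    conn.commit()

conn = sqlite3.connect(":memory:")
run_migrations(conn)  # performs 2, skips 0
run_migrations(conn)  # performs 0, skips 2
```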
logger=resource-migrator t=2025-06-19T11:47:00.522319554Z level=info msg="Executing migration" id="Initialize resource tables" grafana | logger=resource-migrator t=2025-06-19T11:47:00.522335085Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=16.331µs grafana | logger=resource-migrator t=2025-06-19T11:47:00.52744817Z level=info msg="Executing migration" id="drop table resource" grafana | logger=resource-migrator t=2025-06-19T11:47:00.527722697Z level=info msg="Migration successfully executed" id="drop table resource" duration=270.126µs grafana | logger=resource-migrator t=2025-06-19T11:47:00.53522573Z level=info msg="Executing migration" id="create table resource" grafana | logger=resource-migrator t=2025-06-19T11:47:00.537058585Z level=info msg="Migration successfully executed" id="create table resource" duration=1.832155ms grafana | logger=resource-migrator t=2025-06-19T11:47:00.541187246Z level=info msg="Executing migration" id="create table resource, index: 0" grafana | logger=resource-migrator t=2025-06-19T11:47:00.54298485Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.798024ms grafana | logger=resource-migrator t=2025-06-19T11:47:00.546847474Z level=info msg="Executing migration" id="drop table resource_history" grafana | logger=resource-migrator t=2025-06-19T11:47:00.546933146Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=84.272µs grafana | logger=resource-migrator t=2025-06-19T11:47:00.555144667Z level=info msg="Executing migration" id="create table resource_history" grafana | logger=resource-migrator t=2025-06-19T11:47:00.556951851Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.806354ms grafana | logger=resource-migrator t=2025-06-19T11:47:00.561104472Z level=info msg="Executing migration" id="create table resource_history, index: 0" grafana | logger=resource-migrator t=2025-06-19T11:47:00.562763243Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.657391ms grafana | logger=resource-migrator t=2025-06-19T11:47:00.567036587Z level=info msg="Executing migration" id="create table resource_history, index: 1" grafana | logger=resource-migrator t=2025-06-19T11:47:00.568262918Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.226031ms grafana | logger=resource-migrator t=2025-06-19T11:47:00.574770576Z level=info msg="Executing migration" id="drop table resource_version" grafana | logger=resource-migrator t=2025-06-19T11:47:00.574897449Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=127.143µs grafana | logger=resource-migrator t=2025-06-19T11:47:00.578565939Z level=info msg="Executing migration" id="create table resource_version" grafana | logger=resource-migrator t=2025-06-19T11:47:00.579896762Z level=info msg="Migration successfully executed" id="create table resource_version" duration=1.331853ms grafana | logger=resource-migrator t=2025-06-19T11:47:00.584214917Z level=info msg="Executing migration" id="create table resource_version, index: 0" grafana | logger=resource-migrator t=2025-06-19T11:47:00.585500499Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.285172ms grafana | logger=resource-migrator t=2025-06-19T11:47:00.589581818Z level=info msg="Executing migration" id="drop table 
resource_blob" grafana | logger=resource-migrator t=2025-06-19T11:47:00.58966736Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=83.582µs grafana | logger=resource-migrator t=2025-06-19T11:47:00.595565644Z level=info msg="Executing migration" id="create table resource_blob" grafana | logger=resource-migrator t=2025-06-19T11:47:00.596800755Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.235081ms grafana | logger=resource-migrator t=2025-06-19T11:47:00.600506245Z level=info msg="Executing migration" id="create table resource_blob, index: 0" grafana | logger=resource-migrator t=2025-06-19T11:47:00.602517605Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=2.010489ms grafana | logger=resource-migrator t=2025-06-19T11:47:00.609211018Z level=info msg="Executing migration" id="create table resource_blob, index: 1" grafana | logger=resource-migrator t=2025-06-19T11:47:00.610560991Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.349613ms grafana | logger=resource-migrator t=2025-06-19T11:47:00.623036066Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" grafana | logger=resource-migrator t=2025-06-19T11:47:00.634583268Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=11.545962ms grafana | logger=resource-migrator t=2025-06-19T11:47:00.639248192Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" grafana | logger=resource-migrator t=2025-06-19T11:47:00.649168895Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=9.914253ms grafana | logger=resource-migrator t=2025-06-19T11:47:00.657804026Z level=info msg="Executing migration" id="Add index to resource_history for polling" grafana | logger=resource-migrator t=2025-06-19T11:47:00.659293642Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=1.489956ms grafana | logger=resource-migrator t=2025-06-19T11:47:00.663405243Z level=info msg="Executing migration" id="Add index to resource for loading" grafana | logger=resource-migrator t=2025-06-19T11:47:00.665246487Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=1.839194ms grafana | logger=resource-migrator t=2025-06-19T11:47:00.669595384Z level=info msg="Executing migration" id="Add column folder in resource_history" grafana | logger=resource-migrator t=2025-06-19T11:47:00.682258153Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=12.663059ms grafana | logger=resource-migrator t=2025-06-19T11:47:00.686046186Z level=info msg="Executing migration" id="Add column folder in resource" grafana | logger=resource-migrator t=2025-06-19T11:47:00.695354913Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=9.307387ms grafana | logger=resource-migrator t=2025-06-19T11:47:00.701694648Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" grafana | logger=deletion-marker-migrator t=2025-06-19T11:47:00.701719738Z level=info msg="finding any deletion markers" grafana | logger=resource-migrator t=2025-06-19T11:47:00.702327864Z level=info msg="Migration successfully executed" 
id="Migrate DeletionMarkers to real Resource objects" duration=632.476µs grafana | logger=resource-migrator t=2025-06-19T11:47:00.708107845Z level=info msg="Executing migration" id="Add index to resource_history for get trash" grafana | logger=resource-migrator t=2025-06-19T11:47:00.710726149Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=2.616954ms grafana | logger=resource-migrator t=2025-06-19T11:47:00.71488421Z level=info msg="Executing migration" id="Add generation to resource history" grafana | logger=resource-migrator t=2025-06-19T11:47:00.726531825Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=11.649275ms grafana | logger=resource-migrator t=2025-06-19T11:47:00.751409753Z level=info msg="Executing migration" id="Add generation index to resource history" grafana | logger=resource-migrator t=2025-06-19T11:47:00.752886719Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=1.476096ms grafana | logger=resource-migrator t=2025-06-19T11:47:00.757683666Z level=info msg="migrations completed" performed=26 skipped=0 duration=242.759802ms grafana | logger=resource-migrator t=2025-06-19T11:47:00.758356493Z level=info msg="Unlocking database" grafana | t=2025-06-19T11:47:00.758710292Z level=info caller=logger.go:214 time=2025-06-19T11:47:00.758685331Z msg="Using channel notifier" logger=sql-resource-server grafana | logger=plugin.store t=2025-06-19T11:47:00.769886965Z level=info msg="Loading plugins..." grafana | logger=plugins.registration t=2025-06-19T11:47:00.824517949Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugins.initialization t=2025-06-19T11:47:00.82455313Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugin.store t=2025-06-19T11:47:00.824582951Z level=info msg="Plugins loaded" count=53 duration=54.697196ms grafana | logger=query_data t=2025-06-19T11:47:00.829413619Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2025-06-19T11:47:00.834576445Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-19T11:47:00.852313849Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert t=2025-06-19T11:47:00.873149728Z level=info msg="Using simple database alert instance store" grafana | logger=ngalert.state.manager.persist t=2025-06-19T11:47:00.873185199Z level=info msg="Using sync state persister" grafana | logger=infra.usagestats.collector t=2025-06-19T11:47:00.87774171Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=ngalert.state.manager t=2025-06-19T11:47:00.878975111Z level=info msg="Warming state cache for startup" grafana | logger=grafanaStorageLogger t=2025-06-19T11:47:00.880469367Z level=info msg="Storage starting" grafana | logger=http.server t=2025-06-19T11:47:00.88103887Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=ngalert.multiorg.alertmanager t=2025-06-19T11:47:00.881094082Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=plugin.backgroundinstaller t=2025-06-19T11:47:00.881731018Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | 
logger=ngalert.state.manager t=2025-06-19T11:47:00.902854754Z level=info msg="State cache has been initialized" states=0 duration=23.877212ms grafana | logger=ngalert.scheduler t=2025-06-19T11:47:00.902900735Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 grafana | logger=ticker t=2025-06-19T11:47:00.902956146Z level=info msg=starting first_tick=2025-06-19T11:47:10Z grafana | logger=provisioning.datasources t=2025-06-19T11:47:00.984909138Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=grafana.update.checker t=2025-06-19T11:47:00.994604146Z level=info msg="Update check succeeded" duration=93.835574ms grafana | logger=sqlstore.transactions t=2025-06-19T11:47:01.000231113Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=plugins.update.checker t=2025-06-19T11:47:01.007901121Z level=info msg="Update check succeeded" duration=104.808512ms grafana | logger=provisioning.alerting t=2025-06-19T11:47:01.008565177Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2025-06-19T11:47:01.008592258Z level=info msg="finished to provision alerting" grafana | logger=provisioning.dashboard t=2025-06-19T11:47:01.02914283Z level=info msg="starting to provision dashboards" grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-19T11:47:01.082113184Z level=info msg="Patterns update finished" duration=103.189852ms grafana | logger=plugin.installer t=2025-06-19T11:47:01.625672217Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=installer.fs t=2025-06-19T11:47:01.688604985Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" grafana | logger=plugins.registration t=2025-06-19T11:47:01.714877577Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app grafana | logger=plugin.backgroundinstaller t=2025-06-19T11:47:01.714919198Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=833.129029ms grafana | logger=plugin.backgroundinstaller t=2025-06-19T11:47:01.714984519Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=grafana-apiserver t=2025-06-19T11:47:01.774631477Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-19T11:47:01.776144754Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-19T11:47:01.778358948Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-19T11:47:01.77926339Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-19T11:47:01.780107681Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-19T11:47:01.783856833Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-19T11:47:01.784466878Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-19T11:47:01.785031531Z level=info msg="Adding GroupVersion 
dashboard.grafana.app v2alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-19T11:47:01.785720058Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=app-registry t=2025-06-19T11:47:01.843821548Z level=info msg="app registry initialized" grafana | logger=plugin.installer t=2025-06-19T11:47:02.00187845Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=installer.fs t=2025-06-19T11:47:02.065106845Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.3 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" grafana | logger=plugins.registration t=2025-06-19T11:47:02.084495489Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app grafana | logger=plugin.backgroundinstaller t=2025-06-19T11:47:02.084557341Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=369.560141ms grafana | logger=plugin.backgroundinstaller t=2025-06-19T11:47:02.084703394Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=provisioning.dashboard t=2025-06-19T11:47:02.435328052Z level=info msg="finished to provision dashboards" grafana | logger=plugin.installer t=2025-06-19T11:47:02.575231501Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=installer.fs t=2025-06-19T11:47:02.717641871Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.18 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" grafana | logger=plugins.registration t=2025-06-19T11:47:02.748182588Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app grafana | logger=plugin.backgroundinstaller t=2025-06-19T11:47:02.748224489Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=663.479613ms grafana | logger=plugin.backgroundinstaller t=2025-06-19T11:47:02.748256899Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=plugin.installer t=2025-06-19T11:47:02.988736246Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=installer.fs t=2025-06-19T11:47:03.043186966Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" grafana | logger=plugins.registration t=2025-06-19T11:47:03.061984856Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app grafana | logger=plugin.backgroundinstaller t=2025-06-19T11:47:03.062015977Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=313.750187ms grafana | logger=infra.usagestats t=2025-06-19T11:48:34.911901465Z level=info msg="Usage stats are ready to report" kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... 
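The broker's preflight "Check if Zookeeper is healthy" amounts to confirming that ZooKeeper answers on zookeeper:2181 before Kafka launches. An equivalent probe can be written against ZooKeeper's `ruok` four-letter command, assuming the server whitelists it (4lw.commands.whitelist=ruok) and is reachable under the compose hostname used above:

```python
# Minimal ZooKeeper liveness probe using the 'ruok' four-letter command.
# A healthy server replies 'imok'; anything else (or a refused connection)
# is treated as unhealthy.
import socket

def zookeeper_ok(host: str = "zookeeper", port: int = 2181) -> bool:
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(b"ruok")
            return sock.recv(16) == b"imok"
    except OSError:
        return False

print(zookeeper_ok())
```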
kafka | [2025-06-19 11:46:55,576] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:55,576] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:55,577] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:55,577] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:55,577] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:55,577] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:55,577] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:55,577] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:55,577] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:55,577] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:55,577] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:55,577] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:55,577] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:55,577] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:55,577] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:55,577] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:55,577] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:55,577] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:55,580] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:55,584] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-19 11:46:55,588] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-19 11:46:55,597] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-19 11:46:55,618] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-19 11:46:55,618] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-19 11:46:55,627] INFO Socket connection established, initiating session, client: /172.17.0.5:47146, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-19 11:46:55,658] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x100000255310000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-19 11:46:55,782] INFO Session: 0x100000255310000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:55,783] INFO EventThread shut down for session: 0x100000255310000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... 
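The preflight client above requested sessionTimeout=40000 and was granted exactly that (negotiated timeout = 40000) before closing its session; the broker proper repeats the handshake next with an 18000 ms request. The same session negotiation can be reproduced from Python with the kazoo client library, which is an assumption here for illustration (the containers use the Java client):

```python
# Sketch of the ZooKeeper session handshake from Python using kazoo.
# The server clamps the requested session timeout to its configured bounds.
from kazoo.client import KazooClient

zk = KazooClient(hosts="zookeeper:2181", timeout=40.0)  # requested session timeout
zk.start()
session_id, _password = zk.client_id
print(f"session id = {session_id:#x}")  # e.g. 0x100000255310000 in the log above
zk.stop()
zk.close()
```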
kafka | [2025-06-19 11:46:56,614] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2025-06-19 11:46:56,932] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-19 11:46:57,024] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2025-06-19 11:46:57,025] INFO starting (kafka.server.KafkaServer) kafka | [2025-06-19 11:46:57,026] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2025-06-19 11:46:57,045] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-19 11:46:57,050] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:57,050] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:57,051] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:57,051] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:57,051] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:57,051] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.
jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/
java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:57,051] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:57,051] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:57,051] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:57,051] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:57,051] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:57,051] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:57,051] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:57,051] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:57,051] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:57,051] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:57,051] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:57,051] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:57,053] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@52851b44 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-19 11:46:57,057] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-19 11:46:57,064] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-19 11:46:57,066] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-19 11:46:57,074] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-19 11:46:57,081] INFO Socket connection established, initiating session, client: /172.17.0.5:47148, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-19 11:46:57,101] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x100000255310001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-19 11:46:57,110] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-19 11:46:57,519] INFO Cluster ID = Oxu_XS5tS_KuYOmXHLxK8w (kafka.server.KafkaServer) kafka | [2025-06-19 11:46:57,522] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2025-06-19 11:46:57,569] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.initial.rebalance.delay.ms = 3000 kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.4-IV0 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | 
log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | 
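The KafkaConfig dump above shows the dual-listener setup typical of these compose files: containers on the Docker network reach the broker via the advertised PLAINTEXT://kafka:9092, while processes on the Jenkins host use PLAINTEXT_HOST://localhost:29092. A sketch of probing the host-side listener with the kafka-python package (the client library choice is an assumption for illustration):

```python
# Probe the broker through the PLAINTEXT_HOST listener advertised above
# (localhost:29092). Any Kafka client that honors advertised.listeners
# would resolve the broker the same way.
from kafka import KafkaAdminClient

admin = KafkaAdminClient(bootstrap_servers="localhost:29092", client_id="csit-probe")
print(admin.list_topics())  # topic names currently visible on the broker
admin.close()
```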
remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.manager.class.name = null kafka | remote.log.metadata.manager.class.path = null kafka | remote.log.metadata.manager.impl.prefix = null kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = null kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 kafka | sasl.mechanism.controller.protocol = GSSAPI kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope kafka | sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | socket.connection.setup.timeout.max.ms = 30000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.listen.backlog.size = 50 kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka 
| ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null kafka | ssl.keystore.password = null kafka | ssl.keystore.type = JKS kafka | ssl.principal.mapping.rules = DEFAULT kafka | ssl.protocol = TLSv1.3 kafka | ssl.provider = null kafka | ssl.secure.random.implementation = null kafka | ssl.trustmanager.algorithm = PKIX kafka | ssl.truststore.certificates = null kafka | ssl.truststore.location = null kafka | ssl.truststore.password = null kafka | ssl.truststore.type = JKS kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 kafka | transaction.max.timeout.ms = 900000 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 kafka | transaction.state.log.load.buffer.size = 5242880 kafka | transaction.state.log.min.isr = 2 kafka | transaction.state.log.num.partitions = 50 kafka | transaction.state.log.replication.factor = 3 kafka | transaction.state.log.segment.bytes = 104857600 kafka | transactional.id.expiration.ms = 604800000 kafka | unclean.leader.election.enable = false kafka | zookeeper.clientCnxnSocket = null kafka | zookeeper.connect = zookeeper:2181 kafka | zookeeper.connection.timeout.ms = null kafka | zookeeper.max.in.flight.requests = 10 kafka | zookeeper.metadata.migration.enable = false kafka | zookeeper.session.timeout.ms = 18000 kafka | zookeeper.set.acl = false kafka | zookeeper.ssl.cipher.suites = null kafka | zookeeper.ssl.client.enable = false kafka | zookeeper.ssl.crl.enable = false kafka | zookeeper.ssl.enabled.protocols = null kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS kafka | zookeeper.ssl.keystore.location = null kafka | zookeeper.ssl.keystore.password = null kafka | zookeeper.ssl.keystore.type = null kafka | zookeeper.ssl.ocsp.enable = false kafka | zookeeper.ssl.protocol = TLSv1.2 kafka | zookeeper.ssl.truststore.location = null kafka | zookeeper.ssl.truststore.password = null kafka | zookeeper.ssl.truststore.type = null kafka | (kafka.server.KafkaConfig) kafka | [2025-06-19 11:46:57,605] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-19 11:46:57,605] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-19 11:46:57,605] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-19 11:46:57,608] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-19 11:46:57,642] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) kafka | [2025-06-19 11:46:57,644] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager) kafka | [2025-06-19 11:46:57,658] INFO Loaded 0 logs in 16ms. (kafka.log.LogManager) kafka | [2025-06-19 11:46:57,658] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) kafka | [2025-06-19 11:46:57,660] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
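[editor's note] The dump above is the broker's effective configuration. It can also be read back from the running broker at runtime; this is not part of the CSIT job, just a minimal sketch with the confluent-kafka Python client, assuming the host-mapped listener on localhost:29092 is reachable from wherever it runs:

    # Read live broker configuration back from broker.id = 1 (see dump above).
    from confluent_kafka.admin import AdminClient, ConfigResource

    admin = AdminClient({"bootstrap.servers": "localhost:29092"})
    resource = ConfigResource(ConfigResource.Type.BROKER, "1")

    # describe_configs() returns {ConfigResource: future}; each future
    # resolves to a dict of config name -> ConfigEntry.
    for res, fut in admin.describe_configs([resource]).items():
        entries = fut.result()
        for name in ("advertised.listeners", "offsets.topic.num.partitions", "zookeeper.connect"):
            print(name, "=", entries[name].value)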
kafka | [2025-06-19 11:46:57,605] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-19 11:46:57,605] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-19 11:46:57,605] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-19 11:46:57,608] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2025-06-19 11:46:57,642] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
kafka | [2025-06-19 11:46:57,644] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager)
kafka | [2025-06-19 11:46:57,658] INFO Loaded 0 logs in 16ms. (kafka.log.LogManager)
kafka | [2025-06-19 11:46:57,658] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka | [2025-06-19 11:46:57,660] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka | [2025-06-19 11:46:57,670] INFO Starting the log cleaner (kafka.log.LogCleaner)
kafka | [2025-06-19 11:46:57,716] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
kafka | [2025-06-19 11:46:57,731] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
kafka | [2025-06-19 11:46:57,745] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-19 11:46:57,792] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-19 11:46:58,158] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2025-06-19 11:46:58,162] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
kafka | [2025-06-19 11:46:58,185] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka | [2025-06-19 11:46:58,185] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2025-06-19 11:46:58,185] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
kafka | [2025-06-19 11:46:58,190] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
kafka | [2025-06-19 11:46:58,197] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-19 11:46:58,216] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-19 11:46:58,218] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-19 11:46:58,220] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-19 11:46:58,221] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-19 11:46:58,235] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
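[editor's note] The two acceptors above correspond to the two listeners in the config dump: clients inside the compose network bootstrap against kafka:9092 (PLAINTEXT), while clients on the build host use localhost:29092 (PLAINTEXT_HOST) and are handed the matching advertised.listeners address back. A minimal host-side smoke test, assuming the port mapping, could look like:

    # Produce one message from the build host via the PLAINTEXT_HOST listener.
    # The topic name is only an illustration here; with
    # auto.create.topics.enable=true the broker would create it on first use.
    from confluent_kafka import Producer

    producer = Producer({"bootstrap.servers": "localhost:29092"})
    producer.produce("policy-pdp-pap", value=b"smoke-test")
    producer.flush(10)  # wait up to 10s for delivery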
kafka | [2025-06-19 11:46:58,270] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka | [2025-06-19 11:46:58,308] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1750333618296,1750333618296,1,0,0,72057604057137153,258,0,27 (kafka.zk.KafkaZkClient)
kafka | [2025-06-19 11:46:58,309] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
kafka | [2025-06-19 11:46:58,401] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
kafka | [2025-06-19 11:46:58,411] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-19 11:46:58,419] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-19 11:46:58,420] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-19 11:46:58,426] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2025-06-19 11:46:58,436] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,440] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,445] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:46:58,449] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2025-06-19 11:46:58,455] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:46:58,477] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-19 11:46:58,479] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
kafka | [2025-06-19 11:46:58,480] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
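[editor's note] The broker registration above is a plain JSON znode in ZooKeeper, so it can be inspected directly. A sketch with the kazoo client, assuming ZooKeeper's client port is reachable from where this runs (inside the compose network it is zookeeper:2181, per the config dump; localhost:2181 below is an assumed port mapping):

    # Inspect the broker registration znode created above.
    import json
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="localhost:2181")  # assumed host mapping
    zk.start()
    data, stat = zk.get("/brokers/ids/1")
    broker = json.loads(data)
    print(broker["endpoints"])                # PLAINTEXT/PLAINTEXT_HOST addresses
    print("czxid (broker epoch):", stat.czxid)  # 27 in the log above
    zk.stop()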
kafka | [2025-06-19 11:46:58,484] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2025-06-19 11:46:58,486] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,492] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,497] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2025-06-19 11:46:58,499] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,525] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,532] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,539] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2025-06-19 11:46:58,540] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
kafka | [2025-06-19 11:46:58,555] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
kafka | [2025-06-19 11:46:58,557] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,557] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,558] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,558] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,563] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,564] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,564] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,564] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
kafka | [2025-06-19 11:46:58,565] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,570] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
kafka | [2025-06-19 11:46:58,573] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka | [2025-06-19 11:46:58,578] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-19 11:46:58,579] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-19 11:46:58,584] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-19 11:46:58,585] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
kafka | [2025-06-19 11:46:58,585] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-19 11:46:58,586] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-19 11:46:58,587] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
kafka | [2025-06-19 11:46:58,588] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
kafka | [2025-06-19 11:46:58,589] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,590] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
kafka | [2025-06-19 11:46:58,594] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,595] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,596] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,596] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,598] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,623] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-19 11:46:58,623] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
kafka | [2025-06-19 11:46:58,623] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-19 11:46:58,623] INFO Kafka startTimeMs: 1750333618609 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2025-06-19 11:46:58,626] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
kafka | [2025-06-19 11:46:58,690] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-19 11:46:58,708] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-19 11:46:58,715] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2025-06-19 11:47:03,625] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka | [2025-06-19 11:47:03,626] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
kafka | [2025-06-19 11:47:35,426] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
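[editor's note] policy-pdp-pap is auto-created here on first use (auto.create.topics.enable=true), picking up num.partitions=1 and default.replication.factor=1 from the config dump, while __consumer_offsets below is created internally from the offsets.topic.* settings. As a hedged sketch, explicit equivalents with the admin client would be:

    # Explicit equivalents of the two topics created in this part of the log.
    from confluent_kafka.admin import AdminClient, NewTopic

    admin = AdminClient({"bootstrap.servers": "localhost:29092"})
    topics = [
        # matches policy-pdp-pap: 1 partition, replication factor 1
        NewTopic("policy-pdp-pap", num_partitions=1, replication_factor=1),
        # mirrors the internal __consumer_offsets settings on a demo topic;
        # "demo.offsets.like" is a made-up name, the broker owns the real one
        NewTopic("demo.offsets.like", num_partitions=50, replication_factor=1,
                 config={"cleanup.policy": "compact",
                         "compression.type": "producer",
                         "segment.bytes": "104857600"}),
    ]
    for topic, future in admin.create_topics(topics).items():
        future.result()  # raises on error, e.g. if the topic already exists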
kafka | [2025-06-19 11:47:35,431] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
kafka | [2025-06-19 11:47:35,432] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2025-06-19 11:47:35,438] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
kafka | [2025-06-19 11:47:35,477] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(rQhcrnMrSfCoiQzL5IxZYA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(z4Vp8YMrS7ipvPoszx-lQg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=),
__consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-19 11:47:35,479] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2025-06-19 11:47:35,481] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,481] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,482] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,482] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,482] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,482] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,482] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,482] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,482] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,482] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,482] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,482] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | 
[2025-06-19 11:47:35,482] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,483] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,483] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,483] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,483] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,483] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,483] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,483] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,483] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,483] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,483] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,483] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,483] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,483] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,484] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned 
replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,484] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,485] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,486] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,486] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,486] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,486] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-19 11:47:35,486] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-19 11:47:35,491] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,491] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,491] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,491] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,491] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed 
state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,492] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,493] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,494] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-19 11:47:35,494] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-19 11:47:35,640] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,640] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,640] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,640] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,640] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,640] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,641] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,641] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,641] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,641] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,641] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,641] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,641] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 
11:47:35,641] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,641] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,642] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,642] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,642] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,642] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,642] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,642] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,642] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,642] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,642] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,642] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
kafka | [2025-06-19 11:47:35,643] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,643] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) kafka | [2025-06-19 11:47:35,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,644] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,645] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-19 11:47:35,647] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-19 11:47:35,647] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-19 11:47:35,647] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-19 11:47:35,648] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-19 11:47:35,648] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-19 11:47:35,648] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-19 11:47:35,648] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-19 11:47:35,648] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-19 11:47:35,648] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-19 11:47:35,648] 
TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-19 11:47:35,648] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-19 11:47:35,648] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-19 11:47:35,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-19 11:47:35,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-19 11:47:35,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-19 11:47:35,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-19 11:47:35,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-19 11:47:35,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-19 11:47:35,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-19 11:47:35,649] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-19 11:47:35,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-19 11:47:35,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-19 11:47:35,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-19 11:47:35,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-19 11:47:35,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-19 11:47:35,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-19 11:47:35,650] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-19 11:47:35,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-19 11:47:35,650] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-19 11:47:35,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-19 11:47:35,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-19 11:47:35,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-19 11:47:35,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-19 11:47:35,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-19 11:47:35,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-19 11:47:35,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-19 11:47:35,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-19 11:47:35,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-19 11:47:35,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-19 11:47:35,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-19 11:47:35,651] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-19 11:47:35,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-19 11:47:35,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-19 11:47:35,652] TRACE [Controller id=1 
epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-19 11:47:35,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-19 11:47:35,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-19 11:47:35,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-19 11:47:35,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-19 11:47:35,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-19 11:47:35,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-19 11:47:35,652] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-19 11:47:35,653] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-19 11:47:35,655] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers 
HashSet(1) for 51 partitions (state.change.logger) kafka | [2025-06-19 11:47:35,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,658] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka | 
[2025-06-19 11:47:35,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,659] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,660] TRACE [Controller id=1 epoch=1] 
Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,660] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,661] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,661] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,661] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,661] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,661] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,661] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,661] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,661] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,661] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,662] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-19 11:47:35,662] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-19 11:47:35,665] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2025-06-19 11:47:35,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,668] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,668] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,668] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,668] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,668] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,668] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,668] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,669] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,669] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,669] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,669] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,669] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,669] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,669] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,670] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,670] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,670] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,670] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,670] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,670] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,670] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 
epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,670] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,671] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,671] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,671] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,671] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,671] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,671] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,671] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,672] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,672] TRACE 
[Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,672] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,672] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,672] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,672] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,672] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,672] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,672] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,673] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,673] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,673] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,673] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-19 11:47:35,712] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-19 11:47:35,712] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-19 11:47:35,712] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-19 11:47:35,712] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-19 11:47:35,712] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-19 11:47:35,712] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-19 11:47:35,712] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-19 11:47:35,712] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-19 11:47:35,713] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-19 11:47:35,713] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-19 11:47:35,713] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-19 11:47:35,713] TRACE 
[Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-19 11:47:35,713] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-19 11:47:35,713] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-19 11:47:35,713] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-19 11:47:35,713] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-19 11:47:35,713] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-19 11:47:35,713] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-19 11:47:35,713] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-19 11:47:35,713] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-19 11:47:35,713] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-19 11:47:35,713] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-19 11:47:35,713] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-19 11:47:35,714] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-19 11:47:35,714] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-19 11:47:35,714] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-19 11:47:35,714] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-19 11:47:35,714] TRACE 
[Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-19 11:47:35,714] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-19 11:47:35,714] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-19 11:47:35,714] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-19 11:47:35,714] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-19 11:47:35,714] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-19 11:47:35,714] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-19 11:47:35,714] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-19 11:47:35,714] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-19 11:47:35,715] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-19 11:47:35,715] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-19 11:47:35,715] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-19 11:47:35,715] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-19 11:47:35,715] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-19 11:47:35,715] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-19 11:47:35,715] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-19 11:47:35,715] TRACE [Broker 
id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-19 11:47:35,715] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-19 11:47:35,715] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-19 11:47:35,715] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-19 11:47:35,715] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-19 11:47:35,715] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-19 11:47:35,715] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-19 11:47:35,715] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-19 11:47:35,717] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) kafka | [2025-06-19 11:47:35,717] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) kafka | [2025-06-19 11:47:35,764] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:35,774] INFO Created log for partition __consumer_offsets-3 in 
/var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:35,776] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,777] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,778] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:35,792] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:35,794] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:35,794] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,794] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,795] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:35,803] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:35,804] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:35,805] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,805] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,805] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-19 11:47:35,814] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:35,815] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:35,815] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,816] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,816] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:35,827] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:35,828] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:35,828] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,828] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,829] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:35,836] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:35,837] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:35,837] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,838] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,838] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-19 11:47:35,846] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:35,847] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:35,847] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,847] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,847] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:35,858] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:35,859] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:35,859] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,859] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,860] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:35,867] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:35,868] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:35,868] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,869] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,869] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-19 11:47:35,879] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:35,880] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:35,880] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,880] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,880] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:35,889] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:35,890] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:35,891] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,891] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,891] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:35,900] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:35,901] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:35,901] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,901] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,901] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-19 11:47:35,911] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:35,912] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:35,912] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,912] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,912] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:35,923] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:35,924] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:35,924] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,924] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,924] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:35,933] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:35,935] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:35,935] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,935] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,935] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-19 11:47:35,943] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:35,944] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:35,944] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,944] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,945] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:35,954] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:35,955] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:35,955] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,955] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,956] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:35,965] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:35,966] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:35,966] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,966] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,967] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-19 11:47:35,976] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:35,977] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:35,977] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,977] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,978] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:35,989] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:35,991] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:35,991] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,991] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:35,992] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:36,003] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:36,004] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:36,004] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,004] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,004] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-19 11:47:36,017] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:36,018] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:36,018] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,018] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,018] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:36,030] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:36,031] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:36,031] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,032] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,032] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:36,039] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:36,040] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:36,040] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,040] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,040] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-19 11:47:36,050] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:36,051] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:36,051] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,051] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,051] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:36,058] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:36,059] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:36,059] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,059] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,059] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:36,069] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:36,070] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:36,070] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,070] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,070] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-19 11:47:36,079] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:36,080] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:36,080] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,080] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,080] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:36,088] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:36,089] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:36,089] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,089] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,089] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:36,102] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:36,103] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:36,103] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,103] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,103] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-19 11:47:36,112] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:36,112] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:36,112] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,112] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,113] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:36,123] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:36,125] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:36,125] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,125] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,125] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:36,132] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:36,133] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:36,133] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,133] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,133] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-19 11:47:36,141] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:36,142] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:36,142] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,142] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,142] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:36,150] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:36,151] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2025-06-19 11:47:36,151] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,151] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,151] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(rQhcrnMrSfCoiQzL5IxZYA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:36,159] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:36,160] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:36,160] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,160] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,160] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-19 11:47:36,169] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:36,170] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:36,170] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,170] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,170] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:36,179] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:36,180] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:36,180] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,180] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,180] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-19 11:47:36,188] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-19 11:47:36,189] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-19 11:47:36,189] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,189] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-19 11:47:36,189] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger)
kafka | [2025-06-19 11:47:36,198] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 11:47:36,198] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 11:47:36,198] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,199] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,199] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 11:47:36,210] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 11:47:36,211] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 11:47:36,211] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,211] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,211] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 11:47:36,219] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 11:47:36,220] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 11:47:36,220] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,220] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,220] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 11:47:36,228] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 11:47:36,228] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 11:47:36,228] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,228] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,229] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 11:47:36,237] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 11:47:36,238] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 11:47:36,238] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,238] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,238] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 11:47:36,247] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 11:47:36,248] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 11:47:36,248] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,248] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,248] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 11:47:36,254] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 11:47:36,255] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 11:47:36,255] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,255] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,255] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 11:47:36,270] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 11:47:36,271] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 11:47:36,271] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,271] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,271] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 11:47:36,279] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 11:47:36,280] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 11:47:36,280] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,280] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,280] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 11:47:36,288] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 11:47:36,289] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 11:47:36,289] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,289] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,289] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 11:47:36,299] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 11:47:36,301] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 11:47:36,301] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,301] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,301] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 11:47:36,310] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 11:47:36,310] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2025-06-19 11:47:36,311] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,311] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 11:47:36,311] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(z4Vp8YMrS7ipvPoszx-lQg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
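Editor's note: each __consumer_offsets partition above is created with cleanup.policy=compact, compression.type=producer and a 100 MiB segment size (104857600 bytes), which are the broker's settings for the internal offsets topic. A minimal sketch of creating a similarly configured compacted topic with the confluent-kafka AdminClient follows; the topic name is hypothetical, and the broker address kafka:9092 is taken from the log further below.

```python
from confluent_kafka.admin import AdminClient, NewTopic

# Hypothetical client-side equivalent of the topic properties seen in the log.
admin = AdminClient({"bootstrap.servers": "kafka:9092"})
topic = NewTopic(
    "example-compacted",          # hypothetical topic name
    num_partitions=50,            # __consumer_offsets defaults to 50 partitions
    replication_factor=1,         # single-broker CSIT setup, ISR [1]
    config={
        "cleanup.policy": "compact",
        "segment.bytes": "104857600",
    },
)
for name, future in admin.create_topics([topic]).items():
    future.result()  # raises if creation failed
    print(f"created {name}")
```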
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2025-06-19 11:47:36,318] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2025-06-19 11:47:36,319] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2025-06-19 11:47:36,325] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,327] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
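Editor's note: the broker elects itself group coordinator for every offsets partition because it is the only replica (ISR [1]). Which partition, and therefore which coordinator, serves a given consumer group is derived from the group id: Kafka hashes the id with Java's String.hashCode semantics and takes it modulo offsets.topic.num.partitions (50 here). A rough Python equivalent under that assumption; the group id shown is purely illustrative:

```python
def java_string_hashcode(s: str) -> int:
    """Emulate Java String.hashCode() with 32-bit signed overflow (BMP chars only)."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - 0x100000000 if h >= 0x80000000 else h

def coordinator_partition(group_id: str, num_offsets_partitions: int = 50) -> int:
    # Mirrors Kafka's choice of __consumer_offsets partition for a group.
    return abs(java_string_hashcode(group_id)) % num_offsets_partitions

print(coordinator_partition("opa-pdp-group"))  # hypothetical group id
```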
kafka | [2025-06-19 11:47:36,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,332] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,332] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,332] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,332] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,332] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,332] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,332] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,332] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,333] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,333] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,333] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,333] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,333] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,333] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,333] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,333] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,333] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,333] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,334] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,334] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,334] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,334] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,334] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,334] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,334] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,334] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,334] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,334] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,335] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,335] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,335] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,335] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,335] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,335] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,335] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,335] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,335] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,336] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,336] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,336] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,336] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,336] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,336] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,336] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,336] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,336] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,336] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,336] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,336] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,336] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,336] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,336] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,336] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,336] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,337] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,337] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,337] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,337] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,337] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,337] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,337] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,337] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,337] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,337] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,337] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,337] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,337] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,337] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,337] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,337] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,337] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,337] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,337] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,337] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,337] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:47:36,337] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,340] INFO [Broker id=1] Finished LeaderAndIsr request in 676ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
kafka | [2025-06-19 11:47:36,341] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 13 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
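Editor's note: the whole LeaderAndIsr round for all 51 partitions (50 offsets partitions plus policy-pdp-pap-0) completes in 676ms, and the "Finished loading offsets and group metadata" lines that follow report each partition's load time and how much of it sat in the scheduler. A small sketch that aggregates those timings from a captured log, keyed off the exact message format above:

```python
import re

# Matches the GroupMetadataManager timing lines seen in this log.
PATTERN = re.compile(
    r"Finished loading offsets and group metadata from (__consumer_offsets-\d+) "
    r"in (\d+) milliseconds for epoch \d+, "
    r"of which (\d+) milliseconds was spent in the scheduler"
)

def summarize(log_text: str) -> None:
    rows = [(m.group(1), int(m.group(2)), int(m.group(3)))
            for m in PATTERN.finditer(log_text)]
    total = sum(t for _, t, _ in rows)
    sched = sum(s for _, _, s in rows)
    print(f"{len(rows)} partitions loaded; {total} ms total, "
          f"{sched} ms of that in the scheduler")

# Usage: summarize(open("console.log").read())  # path is hypothetical
```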
kafka | [2025-06-19 11:47:36,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,344] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,344] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,344] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,344] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=z4Vp8YMrS7ipvPoszx-lQg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=rQhcrnMrSfCoiQzL5IxZYA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-19 11:47:36,344] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,345] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,345] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,345] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,345] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,346] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
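Editor's note: the controller's LeaderAndIsrResponseData above reports errorCode=0 for all 50 partitions of topic id z4Vp8YMrS7ipvPoszx-lQg (__consumer_offsets) and for the single partition of rQhcrnMrSfCoiQzL5IxZYA (policy-pdp-pap). One way to cross-check the resulting assignment from a client, sketched with confluent-kafka against the broker address from the log; on this single-broker setup every leader should be broker 1:

```python
from confluent_kafka.admin import AdminClient

admin = AdminClient({"bootstrap.servers": "kafka:9092"})
md = admin.list_topics(timeout=10)  # cluster metadata snapshot

for name in ("__consumer_offsets", "policy-pdp-pap"):
    topic = md.topics[name]
    leaders = {p.leader for p in topic.partitions.values()}
    print(f"{name}: {len(topic.partitions)} partitions, leaders={leaders}")
    # expected here: 50 partitions / 1 partition, leaders={1}
```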
kafka | [2025-06-19 11:47:36,346] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,346] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,346] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,348] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 15 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,348] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,348] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,348] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,349] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,353] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,353] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,353] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,353] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,353] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,353] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,353] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,353] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,353] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,353] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,353] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,353] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,353] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,354] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,354] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,354] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,354] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,354] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,354] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,354] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,354] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,354] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,354] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,354] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,354] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,354] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 20 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2025-06-19 11:47:36,354] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,354] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,354] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,354] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,354] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,354] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,354] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2025-06-19 11:47:36,354] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,354] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 11:47:36,355] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 11:47:36,355] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,355] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 11:47:36,355] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 11:47:36,355] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 11:47:36,355] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,355] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 11:47:36,355] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 11:47:36,355] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 11:47:36,355] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 11:47:36,355] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,355] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 11:47:36,355] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 11:47:36,355] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 11:47:36,355] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,355] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 11:47:36,355] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 11:47:36,356] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 11:47:36,356] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 11:47:36,356] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 11:47:36,356] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 20 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,356] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 11:47:36,356] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 11:47:36,356] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,356] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,356] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,357] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,357] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,357] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-19 11:47:36,357] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,357] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,357] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,358] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-19 11:47:36,358] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 21 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,359] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 22 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,359] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,359] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,359] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,359] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,359] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,360] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,360] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:36,360] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-19 11:47:37,188] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 41b86375-7cd4-4a13-9e12-1ee5878a07d0 in Empty state. Created a new member id consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3-bf1fcdf0-1a79-4f3f-a2f6-b65cb06f1c03 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 11:47:37,188] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-2a2ebe49-d7e4-4021-8416-81ace17e9a39 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 11:47:37,207] INFO [GroupCoordinator 1]: Preparing to rebalance group 41b86375-7cd4-4a13-9e12-1ee5878a07d0 in state PreparingRebalance with old generation 0 (__consumer_offsets-25) (reason: Adding new member consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3-bf1fcdf0-1a79-4f3f-a2f6-b65cb06f1c03 with group instance id None; client reason: need to re-join with the given member-id: consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3-bf1fcdf0-1a79-4f3f-a2f6-b65cb06f1c03) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 11:47:37,207] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-2a2ebe49-d7e4-4021-8416-81ace17e9a39 with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-2a2ebe49-d7e4-4021-8416-81ace17e9a39) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 11:47:40,220] INFO [GroupCoordinator 1]: Stabilized group 41b86375-7cd4-4a13-9e12-1ee5878a07d0 generation 1 (__consumer_offsets-25) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 11:47:40,224] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 11:47:40,241] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-2a2ebe49-d7e4-4021-8416-81ace17e9a39 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 11:47:40,241] INFO [GroupCoordinator 1]: Assignment received from leader consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3-bf1fcdf0-1a79-4f3f-a2f6-b65cb06f1c03 for group 41b86375-7cd4-4a13-9e12-1ee5878a07d0 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 11:48:20,276] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group opa-pdp in Empty state. Created a new member id rdkafka-ea2e9968-66a4-49ee-901f-4fa0b4c2e4f5 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 11:48:20,277] INFO [GroupCoordinator 1]: Preparing to rebalance group opa-pdp in state PreparingRebalance with old generation 0 (__consumer_offsets-25) (reason: Adding new member rdkafka-ea2e9968-66a4-49ee-901f-4fa0b4c2e4f5 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 11:48:23,279] INFO [GroupCoordinator 1]: Stabilized group opa-pdp generation 1 (__consumer_offsets-25) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-19 11:48:23,285] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-ea2e9968-66a4-49ee-901f-4fa0b4c2e4f5 for group opa-pdp for generation 1. The group has 1 members, 0 of which are static. 
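The coordinator lines above are what a brand-new librdkafka-based client produces when it first joins a group: it connects with no member id, the coordinator mints one (the rdkafka-... ids above) and asks it to rejoin, which triggers the rebalance and then the "Stabilized group" line. A minimal sketch with the confluent-kafka Python client (which also wraps librdkafka, hence the rdkafka- prefix); the broker address and group name are taken from the log, while the subscribed topic is an assumption not shown in this excerpt, and this is illustrative rather than the OPA PDP's actual code:

```python
# Minimal sketch of a librdkafka-style consumer joining group "opa-pdp".
# Joining with no member id yields the coordinator's "Dynamic member with
# unknown member id" line, a rebalance, then "Stabilized group ... with 1 members".
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",   # broker address from the log
    "group.id": "opa-pdp",               # group name from the log
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["policy-pdp-pap"])   # assumed topic; not visible in this excerpt

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        print(f"{msg.topic()}[{msg.partition()}]@{msg.offset()}: {msg.value()}")
finally:
    consumer.close()  # sends LeaveGroup, like the testgrp members later in the log
```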
kafka | [2025-06-19 11:49:31,115] INFO Creating topic policy-notification with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2025-06-19 11:49:31,132] INFO [Controller id=1] New topics: [Set(policy-notification)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-notification,Some(ajJ6nzmpQOCDE7QwvO2Hvw),Map(policy-notification-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2025-06-19 11:49:31,132] INFO [Controller id=1] New partition creation callback for policy-notification-0 (kafka.controller.KafkaController)
kafka | [2025-06-19 11:49:31,132] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2025-06-19 11:49:31,133] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-19 11:49:31,133] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2025-06-19 11:49:31,133] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-19 11:49:31,140] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2025-06-19 11:49:31,140] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-notification-0 (state.change.logger)
kafka | [2025-06-19 11:49:31,140] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger)
kafka | [2025-06-19 11:49:31,141] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger)
kafka | [2025-06-19 11:49:31,141] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2025-06-19 11:49:31,141] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2025-06-19 11:49:31,142] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 1 partitions (state.change.logger)
kafka | [2025-06-19 11:49:31,142] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
kafka | [2025-06-19 11:49:31,143] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition policy-notification-0 (state.change.logger)
kafka | [2025-06-19 11:49:31,143] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-notification-0) (kafka.server.ReplicaFetcherManager)
kafka | [2025-06-19 11:49:31,143] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger)
kafka | [2025-06-19 11:49:31,151] INFO [LogLoader partition=policy-notification-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2025-06-19 11:49:31,152] INFO Created log for partition policy-notification-0 in /var/lib/kafka/data/policy-notification-0 with properties {} (kafka.log.LogManager)
kafka | [2025-06-19 11:49:31,155] INFO [Partition policy-notification-0 broker=1] No checkpointed highwatermark is found for partition policy-notification-0 (kafka.cluster.Partition)
kafka | [2025-06-19 11:49:31,155] INFO [Partition policy-notification-0 broker=1] Log loaded for partition policy-notification-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2025-06-19 11:49:31,156] INFO [Broker id=1] Leader policy-notification-0 with topic id Some(ajJ6nzmpQOCDE7QwvO2Hvw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
kafka | [2025-06-19 11:49:31,159] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition policy-notification-0 (state.change.logger)
kafka | [2025-06-19 11:49:31,160] INFO [Broker id=1] Finished LeaderAndIsr request in 18ms correlationId 3 from controller 1 for 1 partitions (state.change.logger)
kafka | [2025-06-19 11:49:31,160] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=ajJ6nzmpQOCDE7QwvO2Hvw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-19 11:49:31,162] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-notification-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-19 11:49:31,163] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2025-06-19 11:49:31,164] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2025-06-19 11:51:06,041] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-5219282c-dfa9-4e70-bf75-dc038c13cf9d and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
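The block above shows policy-notification being auto-created with a single partition led by broker 1 (the HashMap(0 -> ArrayBuffer(1)) assignment). For reference, a sketch of creating the same topic explicitly with confluent-kafka's AdminClient, matching the 1-partition / 1-replica layout in the log; in the CSIT run the topic is created on first use, so this is purely illustrative:

```python
# Sketch: explicit creation of the policy-notification topic with the same
# single-partition, single-replica assignment the controller logs above.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "kafka:9092"})  # broker from the log
futures = admin.create_topics(
    [NewTopic("policy-notification", num_partitions=1, replication_factor=1)]
)
for topic, fut in futures.items():
    try:
        fut.result()  # raises on error, e.g. if the topic already exists
        print(f"created {topic}")
    except Exception as exc:
        print(f"failed to create {topic}: {exc}")
```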
kafka | [2025-06-19 11:51:06,042] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member rdkafka-5219282c-dfa9-4e70-bf75-dc038c13cf9d with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:51:09,044] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:51:09,047] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-5219282c-dfa9-4e70-bf75-dc038c13cf9d for group testgrp for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:51:09,165] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 1 (__consumer_offsets-3) (reason: Removing member rdkafka-5219282c-dfa9-4e70-bf75-dc038c13cf9d on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:51:09,167] INFO [GroupCoordinator 1]: Group testgrp with generation 2 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:51:09,169] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-5219282c-dfa9-4e70-bf75-dc038c13cf9d, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.7, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator)
kafka | [... the same join -> rebalance -> Stabilized -> Assignment received -> LeaveGroup cycle repeats twice more for group testgrp: member rdkafka-f141bf32-b495-43d7-be55-3713ed521c76 (11:51:31,885-11:51:34,900, generations 3-4) and member rdkafka-c8225f32-686e-41ae-a3ce-c513f6eca1b5 (11:51:57,464-11:52:00,480, generations 5-6) ...] (kafka.coordinator.group.GroupCoordinator)
kafka | [2025-06-19 11:52:03,628] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka | [2025-06-19 11:52:03,628] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
kafka | [2025-06-19 11:52:03,633] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 HashMap() (kafka.controller.KafkaController)
kafka | [2025-06-19 11:52:03,635] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController)
policy-api | Waiting for policy-db-migrator port 6824...
policy-api | policy-db-migrator (172.17.0.7:6824) open
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
policy-api |
policy-api |   .   ____          _            __ _ _
policy-api |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-api |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-api |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-api |  =========|_|==============|___/=/_/_/_/
policy-api |
policy-api |  :: Spring Boot ::                (v3.4.6)
policy-api |
policy-api | [2025-06-19T11:47:12.632+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final
policy-api | [2025-06-19T11:47:12.707+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 39 (/app/api.jar started by policy in /opt/app/policy/api/bin)
policy-api | [2025-06-19T11:47:12.709+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default"
policy-api | [2025-06-19T11:47:14.347+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-api | [2025-06-19T11:47:14.541+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 181 ms. Found 6 JPA repository interfaces.
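The "Waiting for policy-db-migrator port 6824..." line above (and the nc retry loop the db-migrator itself runs against postgres:5432 later in this log) is a simple wait-for-port gate that serializes container startup. A stdlib-only sketch of the same idea; the host and port values are the ones that appear in the log:

```python
# Sketch of the wait-for-port gating used between containers in this CSIT run.
import socket
import time

def wait_for_port(host: str, port: int, timeout_s: float = 120.0) -> None:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2.0):
                print(f"{host} ({port}) open")
                return
        except OSError:
            # mirrors the "Connection refused" retries printed by nc in the log
            print(f"connect to {host} port {port} failed: Connection refused")
            time.sleep(2.0)
    raise TimeoutError(f"{host}:{port} not reachable after {timeout_s}s")

wait_for_port("policy-db-migrator", 6824)
```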
policy-api | [2025-06-19T11:47:15.281+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http)
policy-api | [2025-06-19T11:47:15.293+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-19T11:47:15.295+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-api | [2025-06-19T11:47:15.295+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41]
policy-api | [2025-06-19T11:47:15.335+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
policy-api | [2025-06-19T11:47:15.335+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2552 ms
policy-api | [2025-06-19T11:47:15.686+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-api | [2025-06-19T11:47:15.777+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final
policy-api | [2025-06-19T11:47:15.828+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-api | [2025-06-19T11:47:16.242+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-api | [2025-06-19T11:47:16.287+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-api | [2025-06-19T11:47:16.505+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@59aa1d1c
policy-api | [2025-06-19T11:47:16.508+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-api | [2025-06-19T11:47:16.597+00:00|INFO|pooling|main] HHH10001005: Database info:
policy-api |     Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)']
policy-api |     Database driver: undefined/unknown
policy-api |     Database version: 16.4
policy-api |     Autocommit mode: undefined/unknown
policy-api |     Isolation level: undefined/unknown
policy-api |     Minimum pool size: undefined/unknown
policy-api |     Maximum pool size: undefined/unknown
policy-api | [2025-06-19T11:47:18.750+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-api | [2025-06-19T11:47:18.754+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-api | [2025-06-19T11:47:19.462+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
policy-api | [2025-06-19T11:47:20.426+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
policy-api | [2025-06-19T11:47:21.599+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-api | [2025-06-19T11:47:21.647+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager
policy-api | [2025-06-19T11:47:22.369+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path ''
policy-api | [2025-06-19T11:47:22.523+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-api | [2025-06-19T11:47:22.550+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1'
policy-api | [2025-06-19T11:47:22.574+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.703 seconds (process running for 11.496)
policy-api | [2025-06-19T11:47:39.921+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-api | [2025-06-19T11:47:39.921+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
policy-api | [2025-06-19T11:47:39.923+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
policy-api | [2025-06-19T11:50:43.814+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-6] ***** OrderedServiceImpl implementers:
policy-api | []
policy-api | [2025-06-19T11:52:00.802+00:00|WARN|CommonRestController|http-nio-6969-exec-3] "incoming fragment" INVALID, item has status INVALID
policy-api |     item "entity" value "abac:1.0.7" INVALID, does not equal existing entity
policy-api |
policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
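The CSIT wrapper passes the ROBOT_VARIABLES above to Robot Framework on the command line. An equivalent invocation through Robot's Python API, with a subset of the variables from the log; this is a sketch of the call, not the wrapper's actual code:

```python
# Sketch: running the two OPA PDP suites with the variables listed above.
import robot

rc = robot.run(
    "opa-pdp-test.robot",
    "opa-pdp-slas.robot",
    variable=[
        "DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies",
        "POLICY_API_IP:policy-api:6969",
        "POLICY_PAP_IP:policy-pap:6969",
        "POLICY_OPA_IP:policy-opa-pdp:8282",
        "KAFKA_IP:kafka:9092",
        "PROMETHEUS_IP:prometheus:9090",
        "TEST_ENV:docker",
    ],
    outputdir="/tmp/results",  # matches the Output/Log/Report paths printed below
)
print(f"RESULT: {rc}")  # 0 when every test passes, as in the suite output below
```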
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Healthcheck :: Verify OPA PDP health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateDataBeforePolicyDeployment | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesZonePolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesVehiclePolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesAbacPolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test | PASS |
policy-csit | 5 tests, 5 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas | PASS |
policy-csit | 5 tests, 5 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas | PASS |
policy-csit | 10 tests, 10 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-db-migrator | Waiting for postgres port 5432...
policy-db-migrator | nc: connect to postgres (172.17.0.4) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.4) port 5432 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to postgres (172.17.0.4) port 5432 (tcp) failed: Connection refused
policy-db-migrator | Connection to postgres (172.17.0.4) 5432 port [tcp/postgresql] succeeded!
policy-db-migrator | Initializing policyadmin...
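The Opa-Pdp-Slas suite above validates OPA PDP decision/data counters and average response times through Prometheus. A sketch of the kind of instant query such a check performs; the /api/v1/query endpoint is the standard Prometheus HTTP API and the address comes from PROMETHEUS_IP above, but the metric name is a placeholder assumption, since the real metric names do not appear in this log:

```python
# Sketch of a Prometheus instant query like those behind the SLA checks above.
import requests

PROMETHEUS = "http://prometheus:9090"  # PROMETHEUS_IP from ROBOT_VARIABLES

def instant_query(expr: str) -> float:
    resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": expr}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

# Hypothetical metric name standing in for the OPA PDP decision counter:
decisions = instant_query("opa_pdp_policy_decisions_total")
assert decisions > 0, "expected at least one recorded policy decision"
print(f"policy decisions so far: {decisions}")
```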
policy-db-migrator | 321 blocks
policy-db-migrator | Preparing upgrade release version: 0800
policy-db-migrator | Preparing upgrade release version: 0900
policy-db-migrator | Preparing upgrade release version: 1000
policy-db-migrator | Preparing upgrade release version: 1100
policy-db-migrator | Preparing upgrade release version: 1200
policy-db-migrator | Preparing upgrade release version: 1300
policy-db-migrator | Done
policy-db-migrator |                                    List of databases
policy-db-migrator |        Name        |    Owner    | Encoding | Locale Provider |  Collate   |   Ctype    | ICU Locale | ICU Rules |      Access privileges
policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+-----------------------------
policy-db-migrator |  clampacm          | policy_user | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =Tc/policy_user             +
policy-db-migrator |                    |             |          |                 |            |            |            |           | policy_user=CTc/policy_user
policy-db-migrator |  migration         | policy_user | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =Tc/policy_user             +
policy-db-migrator |                    |             |          |                 |            |            |            |           | policy_user=CTc/policy_user
policy-db-migrator |  operationshistory | policy_user | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =Tc/policy_user             +
policy-db-migrator |                    |             |          |                 |            |            |            |           | policy_user=CTc/policy_user
policy-db-migrator |  policyadmin       | policy_user | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =Tc/policy_user             +
policy-db-migrator |                    |             |          |                 |            |            |            |           | policy_user=CTc/policy_user
policy-db-migrator |  policyclamp       | policy_user | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =Tc/policy_user             +
policy-db-migrator |                    |             |          |                 |            |            |            |           | policy_user=CTc/policy_user
policy-db-migrator |  pooling           | policy_user | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =Tc/policy_user             +
policy-db-migrator |                    |             |          |                 |            |            |            |           | policy_user=CTc/policy_user
policy-db-migrator |  postgres          | postgres    | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           |
policy-db-migrator |  template0         | postgres    | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =c/postgres                 +
policy-db-migrator |                    |             |          |                 |            |            |            |           | postgres=CTc/postgres
policy-db-migrator |  template1         | postgres    | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =c/postgres                 +
policy-db-migrator |                    |             |          |                 |            |            |            |           | postgres=CTc/postgres
policy-db-migrator | (9 rows)
policy-db-migrator |
policy-db-migrator | CREATE TABLE
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator |     name     | version
policy-db-migrator | -------------+---------
policy-db-migrator |  policyadmin | 0
policy-db-migrator | (1 row)
policy-db-migrator |
policy-db-migrator |  id | script | operation | from_version | to_version | tag | success | attime
policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+--------
policy-db-migrator | (0 rows)
policy-db-migrator |
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
policy-db-migrator | [... the same 9-row "List of databases" listing is printed a second time ...]
policy-db-migrator | CREATE TABLE
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping
policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping
policy-db-migrator | CREATE TABLE
policy-db-migrator | upgrade: 0 -> 1300
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
policy-db-migrator | CREATE TABLE
policy-db-migrator | INSERT 0 1
policy-db-migrator | rc=0
policy-db-migrator |
policy-db-migrator | [... the same "> upgrade <script> / CREATE TABLE / INSERT 0 1 / rc=0" block repeats for each of the following scripts: 0110-jpapdpstatistics_enginestats.sql, 0120-jpapdpsubgroup_policies.sql, 0130-jpapdpsubgroup_properties.sql, 0140-jpapdpsubgroup_supportedpolicytypes.sql, 0150-jpatoscacapabilityassignment_attributes.sql, 0160-jpatoscacapabilityassignment_metadata.sql, 0170-jpatoscacapabilityassignment_occurrences.sql, 0180-jpatoscacapabilityassignment_properties.sql, 0190-jpatoscacapabilitytype_metadata.sql, 0200-jpatoscacapabilitytype_properties.sql, 0210-jpatoscadatatype_constraints.sql, 0220-jpatoscadatatype_metadata.sql, 0230-jpatoscadatatype_properties.sql, 0240-jpatoscanodetemplate_metadata.sql, 0250-jpatoscanodetemplate_properties.sql, 0260-jpatoscanodetype_metadata.sql, 0270-jpatoscanodetype_properties.sql, 0280-jpatoscapolicy_metadata.sql, 0290-jpatoscapolicy_properties.sql, 0300-jpatoscapolicy_targets.sql, 0310-jpatoscapolicytype_metadata.sql, 0320-jpatoscapolicytype_properties.sql, 0330-jpatoscapolicytype_targets.sql, 0340-jpatoscapolicytype_triggers.sql, 0350-jpatoscaproperty_constraints.sql, 0360-jpatoscaproperty_metadata.sql, 0370-jpatoscarelationshiptype_metadata.sql, 0380-jpatoscarelationshiptype_properties.sql, 0390-jpatoscarequirement_metadata.sql, 0400-jpatoscarequirement_occurrences.sql, 0410-jpatoscarequirement_properties.sql, 0420-jpatoscaservicetemplate_metadata.sql, 0430-jpatoscatopologytemplate_inputs.sql, 0440-pdpgroup_pdpsubgroup.sql, 0450-pdpgroup.sql, 0460-pdppolicystatus.sql, 0470-pdp.sql, 0480-pdpstatistics.sql, 0490-pdpsubgroup_pdp.sql, 0500-pdpsubgroup.sql, 0510-toscacapabilityassignment.sql, 0520-toscacapabilityassignments.sql, 0530-toscacapabilityassignments_toscacapabilityassignment.sql, 0540-toscacapabilitytype.sql, 0550-toscacapabilitytypes.sql, 0560-toscacapabilitytypes_toscacapabilitytype.sql, 0570-toscadatatype.sql, 0580-toscadatatypes.sql, 0590-toscadatatypes_toscadatatype.sql, 0600-toscanodetemplate.sql ...]
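Each upgrade script records its outcome in the policyadmin_schema_changelog table shown earlier (columns id, script, operation, from_version, to_version, tag, success, attime). Before the remaining scripts continue below, a sketch of auditing that changelog after the run; the connection details are assumptions based on the service and user names in the log, not values it prints:

```python
# Sketch: reading the migrator's changelog table to verify every script succeeded.
import psycopg2

conn = psycopg2.connect(
    host="postgres", port=5432,          # service/port from the log
    dbname="migration",                  # assumed from the database listing above
    user="policy_user",                  # owner name from the database listing
    password="example-password",         # credentials are not shown in the log
)
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT script, operation, to_version, success "
        "FROM policyadmin_schema_changelog ORDER BY id"
    )
    for script, operation, to_version, success in cur.fetchall():
        print(f"{script}: {operation} -> {to_version} (success={success})")
conn.close()
```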
policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0640-toscanodetypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0770-toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0780-toscarequirements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0810-toscatopologytemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 
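
The foreign keys above arrive in two passes: scripts 0830-0950 run CREATE INDEX for each FK name, and scripts 0960-1060 revisit the same names with ALTER TABLE. A sketch of that pairing under assumed table and column names; only the name FK_ToscaNodeTemplate_capabilitiesName is taken from the log, and the referenced columns are assumed to carry a unique key.

-- pass 1 (0830-series): supporting index, created before the constraint
CREATE INDEX "FK_ToscaNodeTemplate_capabilitiesName"
    ON toscanodetemplate (capabilitiesName, capabilitiesVersion);

-- pass 2 (0960-series): the foreign key itself; in PostgreSQL a plain FK
-- constraint may share its name with an index, since constraint and
-- relation names live in different catalogs
ALTER TABLE toscanodetemplate
    ADD CONSTRAINT "FK_ToscaNodeTemplate_capabilitiesName"
    FOREIGN KEY (capabilitiesName, capabilitiesVersion)
    REFERENCES toscacapabilityassignments (name, version);
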
policy-db-migrator | policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdp.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0210-sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-sequence.sql policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 
policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-upgrade.sql policy-db-migrator | msg policy-db-migrator | --------------------------- policy-db-migrator | upgrade to 1100 completed policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | DROP INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdpstatistics.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-statistics_sequence.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | policyadmin: OK: upgrade (1300) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | -------------+--------- policy-db-migrator | policyadmin | 1300 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:58.630543 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:58.676048 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:58.737521 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:58.792636 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:58.865643 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:58.924733 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:58.989544 policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:59.035751 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:59.091465 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 
1906251146580800u | 1 | 2025-06-19 11:46:59.145369 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:59.19461 policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:59.250941 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:59.303783 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:59.365231 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:59.41686 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:59.471223 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:59.539053 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:59.593696 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:59.66049 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:59.716442 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:59.782892 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:59.835812 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:59.89177 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:59.945926 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:46:59.996879 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:00.0493 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:00.111237 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:00.173154 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:00.22321 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:00.277203 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:00.325137 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:00.389672 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:00.445505 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:00.522133 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:00.578619 policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 
0800 | 1906251146580800u | 1 | 2025-06-19 11:47:00.645307 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:00.702857 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:00.77473 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:00.827071 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:00.897684 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:00.94573 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:00.997818 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:01.054731 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:01.107461 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:01.163 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:01.218356 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:01.279952 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:01.328575 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:01.379505 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:01.443054 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:01.510247 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:01.567353 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:01.619352 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:01.682823 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:01.737077 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:01.8049 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:01.870838 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:01.935792 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:02.001092 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:02.057778 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:02.111649 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:02.180531 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 
11:47:02.238569 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:02.318583 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:02.385319 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:02.442011 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:02.499495 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:02.570118 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:02.624367 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:02.701041 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:02.758509 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:02.838036 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:02.895194 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:02.954081 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:03.006749 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:03.062346 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:03.114761 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:03.168288 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:03.220326 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:03.274654 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:03.341849 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:03.397416 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:03.44982 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:03.503744 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:03.563996 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:03.628501 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:03.683531 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:03.746096 
policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:03.797141 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:03.851524 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:03.905969 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:03.961607 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:04.016551 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:04.075146 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:04.139061 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1906251146580800u | 1 | 2025-06-19 11:47:04.185083 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1906251146580900u | 1 | 2025-06-19 11:47:04.250159 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1906251146580900u | 1 | 2025-06-19 11:47:04.306231 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1906251146580900u | 1 | 2025-06-19 11:47:04.360959 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1906251146580900u | 1 | 2025-06-19 11:47:04.419788 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1906251146580900u | 1 | 2025-06-19 11:47:04.480482 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1906251146580900u | 1 | 2025-06-19 11:47:04.541424 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1906251146580900u | 1 | 2025-06-19 11:47:04.596407 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1906251146580900u | 1 | 2025-06-19 11:47:04.652441 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1906251146580900u | 1 | 2025-06-19 11:47:04.707035 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1906251146580900u | 1 | 2025-06-19 11:47:04.766461 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1906251146580900u | 1 | 2025-06-19 11:47:04.818273 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1906251146580900u | 1 | 2025-06-19 11:47:04.880348 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1906251146580900u | 1 | 2025-06-19 11:47:04.931769 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1906251146581000u | 1 | 2025-06-19 11:47:04.996709 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1906251146581000u | 1 | 2025-06-19 11:47:05.064014 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1906251146581000u | 1 | 2025-06-19 11:47:05.129619 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1906251146581000u | 1 | 2025-06-19 11:47:05.193269 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1906251146581000u | 1 | 
2025-06-19 11:47:05.25506 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1906251146581000u | 1 | 2025-06-19 11:47:05.310933 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1906251146581000u | 1 | 2025-06-19 11:47:05.380645 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1906251146581000u | 1 | 2025-06-19 11:47:05.440179 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1906251146581000u | 1 | 2025-06-19 11:47:05.508827 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1906251146581100u | 1 | 2025-06-19 11:47:05.557295 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1906251146581200u | 1 | 2025-06-19 11:47:05.623459 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1906251146581200u | 1 | 2025-06-19 11:47:05.685883 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1906251146581200u | 1 | 2025-06-19 11:47:05.748918 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1906251146581200u | 1 | 2025-06-19 11:47:05.817246 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1906251146581300u | 1 | 2025-06-19 11:47:05.877727 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1906251146581300u | 1 | 2025-06-19 11:47:05.930171 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1906251146581300u | 1 | 2025-06-19 11:47:05.988551 policy-db-migrator | (126 rows) policy-db-migrator | policy-db-migrator | policyadmin: OK @ 1300 policy-db-migrator | Initializing clampacm... policy-db-migrator | 97 blocks policy-db-migrator | Preparing upgrade release version: 1400 policy-db-migrator | Preparing upgrade release version: 1500 policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Preparing upgrade release version: 1601 policy-db-migrator | Preparing upgrade release version: 1700 policy-db-migrator | Preparing upgrade release version: 1701 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 
policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | clampacm: upgrade available: 0 -> 1701 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1701 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql policy-db-migrator | 
CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-nodetemplatestate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participantsupportedelements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-participantreplica.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-participant.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participant_replica_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql 
policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-message.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-messagejob.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-participantreplica.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | clampacm: OK: upgrade (1701) 
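
After each database finishes, the migrator prints its bookkeeping state, as it does for clampacm next. Both relations and their columns appear verbatim in this log, so the same state could be read back with a query like the sketch below; the ORDER BY assumes the printed ordering is by id.

-- current version per schema, e.g. clampacm | 1701
SELECT name, version FROM schema_versions;

-- full migration history for one schema
SELECT id, script, operation, from_version, to_version, tag, success, attime
FROM clampacm_schema_changelog
ORDER BY id;
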
policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 1701 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1906251147061400u | 1 | 2025-06-19 11:47:06.691148 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1906251147061400u | 1 | 2025-06-19 11:47:06.764834 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1906251147061400u | 1 | 2025-06-19 11:47:06.832253 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1906251147061400u | 1 | 2025-06-19 11:47:06.898635 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1906251147061400u | 1 | 2025-06-19 11:47:06.957918 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1906251147061400u | 1 | 2025-06-19 11:47:07.026272 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1906251147061400u | 1 | 2025-06-19 11:47:07.079669 policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 
1906251147061400u | 1 | 2025-06-19 11:47:07.148829 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1906251147061400u | 1 | 2025-06-19 11:47:07.202636 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1906251147061400u | 1 | 2025-06-19 11:47:07.259523 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1906251147061400u | 1 | 2025-06-19 11:47:07.311874 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1906251147061400u | 1 | 2025-06-19 11:47:07.368667 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1906251147061400u | 1 | 2025-06-19 11:47:07.433541 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1906251147061500u | 1 | 2025-06-19 11:47:07.489649 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1906251147061500u | 1 | 2025-06-19 11:47:07.552606 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1906251147061500u | 1 | 2025-06-19 11:47:07.617937 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1906251147061500u | 1 | 2025-06-19 11:47:07.679112 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1906251147061500u | 1 | 2025-06-19 11:47:07.73268 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1906251147061500u | 1 | 2025-06-19 11:47:07.786904 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1906251147061500u | 1 | 2025-06-19 11:47:07.844826 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1906251147061500u | 1 | 2025-06-19 11:47:07.897483 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1906251147061600u | 1 | 2025-06-19 11:47:07.952725 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1906251147061600u | 1 | 2025-06-19 11:47:08.003444 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 1906251147061601u | 1 | 2025-06-19 11:47:08.059479 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1906251147061601u | 1 | 2025-06-19 11:47:08.113731 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1906251147061700u | 1 | 2025-06-19 11:47:08.181504 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1906251147061700u | 1 | 2025-06-19 11:47:08.244092 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1906251147061700u | 1 | 2025-06-19 11:47:08.304758 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1906251147061701u | 1 | 2025-06-19 11:47:08.365898 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1906251147061701u | 1 | 2025-06-19 11:47:08.431867 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1906251147061701u | 1 | 2025-06-19 11:47:08.485517 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1906251147061701u | 1 | 2025-06-19 11:47:08.545177 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1906251147061701u | 1 | 2025-06-19 11:47:08.607331 policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1906251147061701u | 1 
| 2025-06-19 11:47:08.664715 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1906251147061701u | 1 | 2025-06-19 11:47:08.72743 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1906251147061701u | 1 | 2025-06-19 11:47:08.784828 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1906251147061701u | 1 | 2025-06-19 11:47:08.836898 policy-db-migrator | (37 rows) policy-db-migrator | policy-db-migrator | clampacm: OK @ 1701 policy-db-migrator | Initializing pooling... policy-db-migrator | 4 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | pooling: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-distributed.locking.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | pooling: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 
policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+--------------------------- policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1906251147091600u | 1 | 2025-06-19 11:47:09.54144 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | pooling: OK @ 1600 policy-db-migrator | Initializing operationshistory... policy-db-migrator | 6 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | 
operationshistory | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-operationshistory.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | operationshistory: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | 
policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1906251147101600u | 1 | 2025-06-19 11:47:10.257181 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1906251147101600u | 1 | 2025-06-19 11:47:10.33093 policy-db-migrator | (2 rows) policy-db-migrator | policy-db-migrator | operationshistory: OK @ 1600 policy-opa-pdp | Waiting for kafka port 9092... policy-opa-pdp | nc: connect to kafka (172.17.0.5) port 9092 (tcp) failed: Connection refused policy-opa-pdp | Connection to kafka (172.17.0.5) 9092 port [tcp/*] succeeded! policy-opa-pdp | Waiting for pap port 6969... 
policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused
policy-opa-pdp | [previous line repeated while waiting for pap to come up]
policy-opa-pdp | Connection to pap (172.17.0.9) 6969 port [tcp/*] succeeded!
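The entrypoint blocks on each dependency with nc before starting the service. A minimal Go sketch of the same wait-for-port loop; the helper below is illustrative, not the entrypoint's actual code:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForPort mimics the nc retry loop above: keep dialing until the TCP port accepts.
func waitForPort(host string, port int) {
	addr := fmt.Sprintf("%s:%d", host, port)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Printf("Connection to %s succeeded!\n", addr)
			return
		}
		fmt.Printf("connect to %s failed: %v\n", addr, err)
		time.Sleep(time.Second)
	}
}

func main() {
	// Hostnames and ports taken from the log above.
	waitForPort("kafka", 9092)
	waitForPort("pap", 6969)
}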
policy-opa-pdp | time="2025-06-19T11:48:15Z" level=debug msg="###################################### "
policy-opa-pdp | time="2025-06-19T11:48:15Z" level=debug msg="OPA-PDP: Starting initialisation "
policy-opa-pdp | time="2025-06-19T11:48:15Z" level=debug msg="###################################### "
policy-opa-pdp | time="2025-06-19T11:48:15Z" level=warning msg="KAFKA_URL not defined, using default value"
policy-opa-pdp | time="2025-06-19T11:48:15Z" level=warning msg="PAP_TOPIC not defined, using default value"
policy-opa-pdp | time="2025-06-19T11:48:15Z" level=warning msg="PATCH_TOPIC not defined, using default value"
policy-opa-pdp | time="2025-06-19T11:48:15Z" level=warning msg="PATCH_GROUPID not defined, using default value"
policy-opa-pdp | time="2025-06-19T11:48:15Z" level=warning msg="API_USER not defined, using default value"
policy-opa-pdp | time="2025-06-19T11:48:15Z" level=warning msg="API_PASSWORD not defined, using default value"
policy-opa-pdp | time="2025-06-19T11:48:15Z" level=warning msg="UseSASLForKAFKA not defined, using default value"
policy-opa-pdp | decodedConfig org.apache.kafka.common.security.scram.ScramLoginModule required username="policy-opa-pdp-ku" password=""
policy-opa-pdp | time="2025-06-19T11:48:15Z" level=debug msg="Username: "
policy-opa-pdp | time="2025-06-19T11:48:15Z" level=debug msg="Password: "
policy-opa-pdp | time="2025-06-19T11:48:15Z" level=warning msg="USE_KAFKA_FOR_PATCH not defined, using default value: false"
policy-opa-pdp | time="2025-06-19T11:48:15Z" level=debug msg="Configuration module: environment initialised"
policy-opa-pdp | DEBU[2025-06-19T11:48:15.2364+00:00] logger initialised Filepath = /var/logs/logs.log, Logsize(MB) = 10, Backups = 3, Loglevel = debug
policy-opa-pdp | DEBU[2025-06-19T11:48:15.2366+00:00] Name: opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40
policy-opa-pdp | DEBU[2025-06-19T11:48:15.2393+00:00] Starting OPA PDP Service
policy-opa-pdp | INFO[2025-06-19T11:48:20.2436+00:00] HTTP server started
policy-opa-pdp | DEBU[2025-06-19T11:48:20.2448+00:00] Create an instance of OPA Object
policy-opa-pdp | DEBU[2025-06-19T11:48:20.2449+00:00] Configure an instance of OPA Object
policy-opa-pdp | DEBU[2025-06-19T11:48:20.2459+00:00] Topic start :::: policy-pdp-pap
policy-opa-pdp | DEBU[2025-06-19T11:48:20.2460+00:00] Creating Kafka Consumer singleton instance
policy-opa-pdp | &map[auto.offset.reset:latest bootstrap.servers:kafka:9092 group.id:opa-pdp]DEBU[2025-06-19T11:48:20.2482+00:00] Topic Subscribed: policy-pdp-pap
policy-opa-pdp | DEBU[2025-06-19T11:48:20.2483+00:00] Created Singleton consumer instance
policy-opa-pdp | DEBU[2025-06-19T11:48:20.2747+00:00] Starting PDP Message Listener.....
policy-opa-pdp | DEBU[2025-06-19T11:48:30.2804+00:00] New Ticker started with interval 60000
policy-opa-pdp | DEBU[2025-06-19T11:48:40.2889+00:00] After registration successful delay
policy-opa-pdp | 2025/06/19 11:49:30 KafkaProducer or producer produce message
policy-opa-pdp | DEBU[2025-06-19T11:49:30.2869+00:00] [OUT|KAFKA|policy-pdp-pap]
policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"fa6b9d54-3965-426e-ae00-2b7f586b69b8","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750333770286","deploymentInstanceInfo":""}
policy-opa-pdp | DEBU[2025-06-19T11:49:30.2869+00:00] Sending Heartbeat ...
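The consumer settings the service prints (&map[auto.offset.reset:latest bootstrap.servers:kafka:9092 group.id:opa-pdp]) map directly onto a confluent-kafka-go ConfigMap. A minimal sketch assuming that client library; the project's real wiring may differ:

package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/v2/kafka"
)

func main() {
	// Same three settings printed in the log above.
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": "kafka:9092",
		"group.id":          "opa-pdp",
		"auto.offset.reset": "latest",
	})
	if err != nil {
		panic(err)
	}
	defer c.Close()

	if err := c.SubscribeTopics([]string{"policy-pdp-pap"}, nil); err != nil {
		panic(err)
	}
	for {
		msg, err := c.ReadMessage(-1) // block until PAP publishes PDP_UPDATE / PDP_STATUS traffic
		if err != nil {
			continue
		}
		fmt.Printf("[IN|KAFKA|%s] %s\n", *msg.TopicPartition.Topic, msg.Value)
	}
}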
policy-opa-pdp | DEBU[2025-06-19T11:49:30.3155+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"fa6b9d54-3965-426e-ae00-2b7f586b69b8","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750333770286","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-19T11:49:30.3158+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-19T11:49:30.3158+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-19T11:49:31.0345+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"96472ba6-f998-42de-881b-ca4c9cd1d966","timestampMs":1750333770945,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-19T11:49:31.0348+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-19T11:49:31.0352+00:00] PDP_UPDATE Message received: 
{"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"96472ba6-f998-42de-881b-ca4c9cd1d966","timestampMs":1750333770945,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-19T11:49:31.0352+00:00] Policy Is Allowed: slice.capacity.check policy-opa-pdp | DEBU[2025-06-19T11:49:31.0352+00:00] Validating properties data for policy: slice.capacity.check policy-opa-pdp | DEBU[2025-06-19T11:49:31.0353+00:00] Validating properties policy for policy: slice.capacity.check policy-opa-pdp | INFO[2025-06-19T11:49:31.0353+00:00] Validation successful for policy: slice.capacity.check policy-opa-pdp | INFO[2025-06-19T11:49:31.0357+00:00] Directory created: /opt/policies/slice/capacity/check policy-opa-pdp | INFO[2025-06-19T11:49:31.0358+00:00] Policy file saved: /opt/policies/slice/capacity/check/policy.rego policy-opa-pdp | INFO[2025-06-19T11:49:31.0360+00:00] Directory created: /opt/data/node/slice/capacity/check policy-opa-pdp | INFO[2025-06-19T11:49:31.0361+00:00] Data file saved: /opt/data/node/slice/capacity/check/data.json policy-opa-pdp | DEBU[2025-06-19T11:49:31.0362+00:00] Before calling combinedoutput policy-opa-pdp | DEBU[2025-06-19T11:49:31.0631+00:00] Bundle Built Sucessfully.... 
policy-opa-pdp | DEBU[2025-06-19T11:49:31.0661+00:00] storage not found creating : /node policy-opa-pdp | DEBU[2025-06-19T11:49:31.0661+00:00] storage not found creating : /node/slice policy-opa-pdp | DEBU[2025-06-19T11:49:31.0663+00:00] storage not found creating : /node/slice/capacity policy-opa-pdp | DEBU[2025-06-19T11:49:31.0663+00:00] storage not found creating : /node/slice/capacity/check policy-opa-pdp | INFO[2025-06-19T11:49:31.0665+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-19T11:49:31.0665+00:00] Loaded Policy: slice.capacity.check policy-opa-pdp | INFO[2025-06-19T11:49:31.0668+00:00] Processed policies_to_be_deployed successfully policy-opa-pdp | INFO[2025-06-19T11:49:31.0670+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/19 11:49:31 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-19T11:49:31.0672+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"96472ba6-f998-42de-881b-ca4c9cd1d966","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"1b087bd7-7d5e-4918-ad6a-6a6b5c9e57be","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333771067","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-19T11:49:31.0673+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-19T11:49:31.0673+00:00] 120000 policy-opa-pdp | DEBU[2025-06-19T11:49:31.0676+00:00] New Ticker started with interval 120000 policy-opa-pdp | DEBU[2025-06-19T11:49:31.0774+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"96472ba6-f998-42de-881b-ca4c9cd1d966","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"1b087bd7-7d5e-4918-ad6a-6a6b5c9e57be","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333771067","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-19T11:49:31.0774+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-19T11:49:31.0775+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-19T11:49:31.1169+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"3e434624-a6b8-448c-a2fa-0b7bb0ae0412","timestampMs":1750333770945,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-19T11:49:31.1171+00:00] messageType: PDP_STATE_CHANGE policy-opa-pdp | 
DEBU[2025-06-19T11:49:31.1172+00:00] PDP STATE CHANGE message received: {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"3e434624-a6b8-448c-a2fa-0b7bb0ae0412","timestampMs":1750333770945,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-19T11:49:31.1174+00:00] State change from PASSIVE To : ACTIVE policy-opa-pdp | INFO[2025-06-19T11:49:31.1175+00:00] Sending PDP Status With State Change response policy-opa-pdp | 2025/06/19 11:49:31 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-19T11:49:31.1177+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"3e434624-a6b8-448c-a2fa-0b7bb0ae0412","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"6b2de463-ec0a-476c-a188-1c4336f6425c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333771117","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-19T11:49:31.1178+00:00] PDP_STATUS With State Change Message Sent Successfully policy-opa-pdp | DEBU[2025-06-19T11:49:31.1293+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"3e434624-a6b8-448c-a2fa-0b7bb0ae0412","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"6b2de463-ec0a-476c-a188-1c4336f6425c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333771117","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-19T11:49:31.1294+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-19T11:49:31.1295+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-19T11:49:31.5002+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7e112d66-63c2-4a2d-8814-d43f03c87d9a","timestampMs":1750333771479,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-19T11:49:31.5005+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-19T11:49:31.5008+00:00] PDP_UPDATE Message received: {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7e112d66-63c2-4a2d-8814-d43f03c87d9a","timestampMs":1750333771479,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | INFO[2025-06-19T11:49:31.5010+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/19 11:49:31 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-19T11:49:31.5013+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp 
Update","response":{"responseTo":"7e112d66-63c2-4a2d-8814-d43f03c87d9a","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"8bfed5cc-d247-4eaa-bdf2-33c4e79a0b14","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333771501","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-19T11:49:31.5015+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-19T11:49:31.5016+00:00] 120000 policy-opa-pdp | DEBU[2025-06-19T11:49:31.5094+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"7e112d66-63c2-4a2d-8814-d43f03c87d9a","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"8bfed5cc-d247-4eaa-bdf2-33c4e79a0b14","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333771501","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-19T11:49:31.5096+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-19T11:49:31.5097+00:00] discarding event of type PDP_STATUS policy-opa-pdp | 2025/06/19 11:50:30 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-19T11:50:30.2890+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"dd72965c-32f7-4d29-ba7c-196526049050","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333830288","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-19T11:50:30.2890+00:00] Sending Heartbeat ... 
policy-opa-pdp | DEBU[2025-06-19T11:50:30.2981+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"dd72965c-32f7-4d29-ba7c-196526049050","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333830288","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-19T11:50:30.2982+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-19T11:50:30.2982+00:00] discarding event of type PDP_STATUS policy-opa-pdp | WARN[2025-06-19T11:50:43.5793+00:00] Invalid or Missing Request ID policy-opa-pdp | DEBU[2025-06-19T11:50:43.5794+00:00] Received Health Check message policy-opa-pdp | INFO[2025-06-19T11:50:43.5880+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-19T11:50:43.5881+00:00] datapath to get Data : / policy-opa-pdp | DEBU[2025-06-19T11:50:43.5883+00:00] Json Data at /: {"node":{"slice":{"capacity":{"check":{"threshold":70}}}},"system":{"version":{"build_commit":"","build_hostname":"","build_timestamp":"","version":"1.1.0"}}} policy-opa-pdp | DEBU[2025-06-19T11:50:44.9927+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ab4b0501-e50d-4de9-a593-4d933a804efd","timestampMs":1750333844942,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-19T11:50:44.9928+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-19T11:50:44.9930+00:00] PDP_UPDATE Message received: 
{"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ab4b0501-e50d-4de9-a593-4d933a804efd","timestampMs":1750333844942,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-19T11:50:44.9930+00:00] Check if Policy is Already Deployed: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-19T11:50:44.9930+00:00] Policy is new and should be deployed: zoneB 1.0.6 policy-opa-pdp | DEBU[2025-06-19T11:50:44.9931+00:00] Policy Is Allowed: zoneB policy-opa-pdp | DEBU[2025-06-19T11:50:44.9931+00:00] Validating properties data for policy: zoneB policy-opa-pdp | DEBU[2025-06-19T11:50:44.9931+00:00] Validating properties policy for policy: zoneB policy-opa-pdp | INFO[2025-06-19T11:50:44.9931+00:00] Validation successful for policy: zoneB policy-opa-pdp | INFO[2025-06-19T11:50:44.9932+00:00] Directory created: /opt/policies/zoneB policy-opa-pdp | INFO[2025-06-19T11:50:44.9933+00:00] Policy file saved: /opt/policies/zoneB/policy.rego policy-opa-pdp | INFO[2025-06-19T11:50:44.9933+00:00] Directory created: /opt/data/node/zoneB policy-opa-pdp | INFO[2025-06-19T11:50:44.9934+00:00] Data file saved: /opt/data/node/zoneB/data.json policy-opa-pdp | DEBU[2025-06-19T11:50:44.9934+00:00] Before calling combinedoutput policy-opa-pdp | DEBU[2025-06-19T11:50:45.0183+00:00] Bundle Built Sucessfully.... 
policy-opa-pdp | DEBU[2025-06-19T11:50:45.0212+00:00] storage not found creating : /node/zoneB policy-opa-pdp | INFO[2025-06-19T11:50:45.0213+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.zoneB" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "zoneB" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "zoneB", policy-opa-pdp | "policy-version": "1.0.6" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-19T11:50:45.0213+00:00] Loaded Policy: zoneB policy-opa-pdp | INFO[2025-06-19T11:50:45.0213+00:00] Processed policies_to_be_deployed successfully policy-opa-pdp | INFO[2025-06-19T11:50:45.0214+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/19 11:50:45 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-19T11:50:45.0214+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ab4b0501-e50d-4de9-a593-4d933a804efd","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"78a2516e-5213-461d-89ff-6d07ea7af794","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333845021","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-19T11:50:45.0215+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-19T11:50:45.0215+00:00] 0 policy-opa-pdp | DEBU[2025-06-19T11:50:45.0308+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ab4b0501-e50d-4de9-a593-4d933a804efd","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"78a2516e-5213-461d-89ff-6d07ea7af794","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333845021","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-19T11:50:45.0309+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-19T11:50:45.0309+00:00] discarding event of type PDP_STATUS policy-opa-pdp | INFO[2025-06-19T11:51:09.1889+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-19T11:51:09.1890+00:00] datapath to get Data : /node/zoneB/zone policy-opa-pdp | DEBU[2025-06-19T11:51:09.1890+00:00] Json Data at /node/zoneB/zone: 
{"zone_access_logs":[{"access":"granted","log_id":"log1","timestamp":"2024-11-01T09:00:00Z","user":"user1","zone_id":"zoneA"},{"access":"denied","log_id":"log2","timestamp":"2024-11-01T10:30:00Z","user":"user2","zone_id":"zoneA"},{"access":"granted","log_id":"log3","timestamp":"2024-11-01T11:00:00Z","user":"user3","zone_id":"zoneB"}]} policy-opa-pdp | DEBU[2025-06-19T11:51:09.2030+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-19T11:51:09.2030+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-19T11:51:09.2034+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-19T11:51:09.2034+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"8ad47b8b-92d6-4da5-acf6-06d6474a257d","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"031ba6bf-203e-465d-8788-838a4c13f9ca","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":420,"timer_rego_query_compile_ns":89902,"timer_rego_query_eval_ns":326369,"timer_rego_query_parse_ns":57261,"timer_sdk_decision_eval_ns":588135},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-19T11:51:09Z","timestamp":"2025-06-19T11:51:09.203512476Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-19T11:51:09.2044+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "8ad47b8b-92d6-4da5-acf6-06d6474a257d", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_log_view": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "has_zone_access": [ policy-opa-pdp | { policy-opa-pdp | "access": "granted", policy-opa-pdp | "user": "user1" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-19T11:51:09.2119+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-19T11:51:09.2120+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-19T11:51:09.2124+00:00] Validation successful for request fields policy-opa-pdp | WARN[2025-06-19T11:51:09.2127+00:00] Policy Name zoeB does not exist policy-opa-pdp | DEBU[2025-06-19T11:51:09.2193+00:00] PDP received a decision request. 
policy-opa-pdp | DEBU[2025-06-19T11:51:09.2193+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-19T11:51:09.2197+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-19T11:51:09.2198+00:00] SDK making a decision policy-opa-pdp | DEBU[2025-06-19T11:51:09.2209+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "813d32c0-9997-4dbf-8f3d-012b4bc1d6d0", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_log_view": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "has_zone_access": [ policy-opa-pdp | { policy-opa-pdp | "access": "granted", policy-opa-pdp | "user": "user1" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | {"decision_id":"813d32c0-9997-4dbf-8f3d-012b4bc1d6d0","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"031ba6bf-203e-465d-8788-838a4c13f9ca","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":850,"timer_rego_query_eval_ns":536912,"timer_sdk_decision_eval_ns":691877},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-19T11:51:09Z","timestamp":"2025-06-19T11:51:09.219957677Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-19T11:51:09.6151+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"d42f0693-33b4-41b4-baa9-6726c1b6d141","timestampMs":1750333869578,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-19T11:51:09.6152+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-19T11:51:09.6153+00:00] PDP_UPDATE Message received: {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"d42f0693-33b4-41b4-baa9-6726c1b6d141","timestampMs":1750333869578,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | INFO[2025-06-19T11:51:09.6154+00:00] Found Policies to be undeployed policy-opa-pdp | INFO[2025-06-19T11:51:09.6154+00:00] Extracted Policy Name: zoneB, Version: 1.0.6 for undeployment policy-opa-pdp | DEBU[2025-06-19T11:51:09.6154+00:00] Deleting Policy from OPA : /zoneB policy-opa-pdp | DEBU[2025-06-19T11:51:09.6192+00:00] Removing policy directory: /opt/policies/zoneB policy-opa-pdp | DEBU[2025-06-19T11:51:09.6195+00:00] Deleting data from OPA : /node/zoneB policy-opa-pdp | DEBU[2025-06-19T11:51:09.6195+00:00] Analyzing dataPath: /node/zoneB policy-opa-pdp | DEBU[2025-06-19T11:51:09.6195+00:00] Path segments: [ node zoneB] policy-opa-pdp | DEBU[2025-06-19T11:51:09.6195+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/zoneB policy-opa-pdp | DEBU[2025-06-19T11:51:09.6196+00:00] Removing data directory: /opt/data/node/zoneB policy-opa-pdp | 
INFO[2025-06-19T11:51:09.6199+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-19T11:51:09.6200+00:00] Policies Map After Undeployment : { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-19T11:51:09.6200+00:00] Processed policies_to_be_undeployed successfully policy-opa-pdp | INFO[2025-06-19T11:51:09.6201+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/19 11:51:09 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-19T11:51:09.6202+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"d42f0693-33b4-41b4-baa9-6726c1b6d141","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"189be6fa-9530-46cd-869d-a0a2352a062f","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333869620","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-19T11:51:09.6203+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-19T11:51:09.6203+00:00] 0 policy-opa-pdp | DEBU[2025-06-19T11:51:09.6283+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"d42f0693-33b4-41b4-baa9-6726c1b6d141","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"189be6fa-9530-46cd-869d-a0a2352a062f","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333869620","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-19T11:51:09.6283+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-19T11:51:09.6284+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-19T11:51:10.8454+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | 
{"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a57f1de8-e764-48c3-ad34-5223c690d942","timestampMs":1750333870814,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-19T11:51:10.8455+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-19T11:51:10.8457+00:00] PDP_UPDATE Message received: {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a57f1de8-e764-48c3-ad34-5223c690d942","timestampMs":1750333870814,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-19T11:51:10.8457+00:00] Check if Policy is Already Deployed: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-19T11:51:10.8458+00:00] Policy is new and should be deployed: vehicle 1.0.6 policy-opa-pdp | DEBU[2025-06-19T11:51:10.8458+00:00] Policy Is Allowed: vehicle policy-opa-pdp | 
DEBU[2025-06-19T11:51:10.8458+00:00] Validating properties data for policy: vehicle policy-opa-pdp | DEBU[2025-06-19T11:51:10.8458+00:00] Validating properties policy for policy: vehicle policy-opa-pdp | INFO[2025-06-19T11:51:10.8458+00:00] Validation successful for policy: vehicle policy-opa-pdp | INFO[2025-06-19T11:51:10.8460+00:00] Directory created: /opt/policies/vehicle policy-opa-pdp | INFO[2025-06-19T11:51:10.8462+00:00] Policy file saved: /opt/policies/vehicle/policy.rego policy-opa-pdp | INFO[2025-06-19T11:51:10.8463+00:00] Directory created: /opt/data/node/vehicle policy-opa-pdp | INFO[2025-06-19T11:51:10.8463+00:00] Data file saved: /opt/data/node/vehicle/data.json policy-opa-pdp | DEBU[2025-06-19T11:51:10.8464+00:00] Before calling combinedoutput policy-opa-pdp | DEBU[2025-06-19T11:51:10.8675+00:00] Bundle Built Sucessfully.... policy-opa-pdp | DEBU[2025-06-19T11:51:10.8734+00:00] storage not found creating : /node/vehicle policy-opa-pdp | INFO[2025-06-19T11:51:10.8735+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.vehicle" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "vehicle" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "vehicle", policy-opa-pdp | "policy-version": "1.0.6" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-19T11:51:10.8737+00:00] Loaded Policy: vehicle policy-opa-pdp | INFO[2025-06-19T11:51:10.8737+00:00] Processed policies_to_be_deployed successfully policy-opa-pdp | INFO[2025-06-19T11:51:10.8738+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/19 11:51:10 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-19T11:51:10.8740+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"a57f1de8-e764-48c3-ad34-5223c690d942","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"06bf80ac-7447-4635-9545-9afccd1c95db","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333870873","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-19T11:51:10.8741+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-19T11:51:10.8741+00:00] 0 policy-opa-pdp | DEBU[2025-06-19T11:51:10.8836+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"a57f1de8-e764-48c3-ad34-5223c690d942","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": 
\"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"06bf80ac-7447-4635-9545-9afccd1c95db","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333870873","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-19T11:51:10.8837+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-19T11:51:10.8837+00:00] discarding event of type PDP_STATUS policy-opa-pdp | 2025/06/19 11:51:31 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-19T11:51:31.0726+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"f4056d36-4366-4d1f-b328-5daad6ef3ee3","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333891072","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-19T11:51:31.0726+00:00] Sending Heartbeat ... policy-opa-pdp | DEBU[2025-06-19T11:51:31.0810+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"f4056d36-4366-4d1f-b328-5daad6ef3ee3","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333891072","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-19T11:51:31.0814+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-19T11:51:31.0814+00:00] discarding event of type PDP_STATUS policy-opa-pdp | INFO[2025-06-19T11:51:34.9136+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-19T11:51:34.9137+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | DEBU[2025-06-19T11:51:34.9137+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} policy-opa-pdp | INFO[2025-06-19T11:51:34.9244+00:00] PDP received a request to update data through API policy-opa-pdp | DEBU[2025-06-19T11:51:34.9246+00:00] All fields are valid! 
policy-opa-pdp | INFO[2025-06-19T11:51:34.9247+00:00] data : [map[op:add path:/round value:trail]] policy-opa-pdp | INFO[2025-06-19T11:51:34.9247+00:00] policy name : vehicle policy-opa-pdp | DEBU[2025-06-19T11:51:34.9247+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] policy-opa-pdp | DEBU[2025-06-19T11:51:34.9247+00:00] dirParts : [ node vehicle] policy-opa-pdp | INFO[2025-06-19T11:51:34.9248+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} policy-opa-pdp | DEBU[2025-06-19T11:51:34.9248+00:00] root: /node/vehicle policy-opa-pdp | DEBU[2025-06-19T11:51:34.9249+00:00] path : round policy-opa-pdp | INFO[2025-06-19T11:51:34.9249+00:00] calling ParsePatchPathEscaped to check the path policy-opa-pdp | DEBU[2025-06-19T11:51:34.9249+00:00] No path conflicts detected policy-opa-pdp | INFO[2025-06-19T11:51:34.9249+00:00] Updated the data in the corresponding path successfully policy-opa-pdp | INFO[2025-06-19T11:51:34.9318+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-19T11:51:34.9319+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | DEBU[2025-06-19T11:51:34.9320+00:00] Json Data at /node/vehicle: {"round":"trail","vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} policy-opa-pdp | INFO[2025-06-19T11:51:34.9424+00:00] PDP received a request to update data through API policy-opa-pdp | DEBU[2025-06-19T11:51:34.9430+00:00] All fields are valid! policy-opa-pdp | INFO[2025-06-19T11:51:34.9432+00:00] data : [map[op:replace path:/round value:%!s(float64=578)]] policy-opa-pdp | INFO[2025-06-19T11:51:34.9433+00:00] policy name : vehicle policy-opa-pdp | DEBU[2025-06-19T11:51:34.9436+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] policy-opa-pdp | DEBU[2025-06-19T11:51:34.9438+00:00] dirParts : [ node vehicle] policy-opa-pdp | INFO[2025-06-19T11:51:34.9440+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} policy-opa-pdp | DEBU[2025-06-19T11:51:34.9441+00:00] root: /node/vehicle policy-opa-pdp | DEBU[2025-06-19T11:51:34.9443+00:00] path : round policy-opa-pdp | INFO[2025-06-19T11:51:34.9444+00:00] calling ParsePatchPathEscaped to check the path policy-opa-pdp | DEBU[2025-06-19T11:51:34.9445+00:00] No path conflicts detected policy-opa-pdp | INFO[2025-06-19T11:51:34.9447+00:00] Updated the data in the corresponding path successfully policy-opa-pdp | INFO[2025-06-19T11:51:34.9516+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-19T11:51:34.9516+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | DEBU[2025-06-19T11:51:34.9517+00:00] Json Data at /node/vehicle: {"round":578,"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} policy-opa-pdp | INFO[2025-06-19T11:51:34.9611+00:00] PDP received a request to update data through API policy-opa-pdp | DEBU[2025-06-19T11:51:34.9615+00:00] All fields are valid! 
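The add and replace operations above, together with the remove that completes the sequence just below, are standard RFC 6902 JSON-Patch operations applied against the policy's data document. A minimal sketch of the same three transformations, assuming the jsonpatch Python package is available (the PDP applies these patches internally; this is illustration only):

    import jsonpatch  # pip install jsonpatch

    # Data document as returned by the GET on /node/vehicle above.
    doc = {"vehicles": [
        {"owner": "user1", "status": "available", "type": "car", "vehicle_id": "v1"},
        {"owner": "user2", "status": "in use", "type": "bike", "vehicle_id": "v2"},
    ]}

    # The three operations this CSIT run issues, in order.
    for ops in ([{"op": "add", "path": "/round", "value": "trail"}],
                [{"op": "replace", "path": "/round", "value": 578}],
                [{"op": "remove", "path": "/round"}]):
        doc = jsonpatch.JsonPatch(ops).apply(doc)

    assert "round" not in doc  # back to the original document, as the final GET confirms

The interleaved GET requests in the log verify each intermediate state: "round": "trail" after the add, "round": 578 after the replace, and no "round" key after the remove.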
policy-opa-pdp | INFO[2025-06-19T11:51:34.9617+00:00] data : [map[op:remove path:/round]] policy-opa-pdp | INFO[2025-06-19T11:51:34.9617+00:00] policy name : vehicle policy-opa-pdp | DEBU[2025-06-19T11:51:34.9619+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] policy-opa-pdp | DEBU[2025-06-19T11:51:34.9621+00:00] dirParts : [ node vehicle] policy-opa-pdp | INFO[2025-06-19T11:51:34.9622+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} policy-opa-pdp | DEBU[2025-06-19T11:51:34.9623+00:00] root: /node/vehicle policy-opa-pdp | DEBU[2025-06-19T11:51:34.9624+00:00] path : round policy-opa-pdp | INFO[2025-06-19T11:51:34.9625+00:00] calling ParsePatchPathEscaped to check the path policy-opa-pdp | DEBU[2025-06-19T11:51:34.9627+00:00] No path conflicts detected policy-opa-pdp | INFO[2025-06-19T11:51:34.9628+00:00] Updated the data in the corresponding path successfully policy-opa-pdp | INFO[2025-06-19T11:51:34.9693+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-19T11:51:34.9694+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | DEBU[2025-06-19T11:51:34.9697+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} policy-opa-pdp | DEBU[2025-06-19T11:51:34.9789+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-19T11:51:34.9790+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-19T11:51:34.9793+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-19T11:51:34.9795+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"bdfd9370-b7ee-49b1-81aa-0b5cb65139e2","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"031ba6bf-203e-465d-8788-838a4c13f9ca","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":650,"timer_rego_query_compile_ns":134053,"timer_rego_query_eval_ns":414159,"timer_rego_query_parse_ns":110933,"timer_sdk_decision_eval_ns":839940},"msg":"Decision Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-19T11:51:34Z","timestamp":"2025-06-19T11:51:34.979688495Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-19T11:51:34.9811+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "bdfd9370-b7ee-49b1-81aa-0b5cb65139e2", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_granted": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "user_has_vehicle_access": [ policy-opa-pdp | { policy-opa-pdp | "status": "available", policy-opa-pdp | "type": "car" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-19T11:51:34.9884+00:00] PDP received a decision request. 
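The decision flow just completed above (validate the request fields, hand the input to the OPA SDK, log the raw result) can be exercised directly against the PDP's decision API. A hedged sketch using requests — the endpoint path, host, port, credentials, and the policyName/input field names are assumptions based on opa-pdp API conventions, not values taken from this log:

    import requests

    decision_request = {
        "policyName": "vehicle",  # assumed field name
        "input": {                # mirrors the decision-log input above
            "user": "user1",
            "vehicle_id": "v1",
            "actions": ["use"],
            "attributes": ["type", "status"],
        },
    }
    resp = requests.post(
        "http://localhost:8282/policy/pdpx/v1/decision",  # assumed host/port/path
        json=decision_request,
        auth=("policyadmin", "zb!XztG34"),  # assumed CSIT credentials
        timeout=10,
    )
    print(resp.json())  # expected to contain allow / action_is_granted, as in the RAW output above

One of the requests that follows uses a nonexistent policy name ("vehile"); the PDP answers it with the warning just below instead of evaluating a decision.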
policy-opa-pdp | DEBU[2025-06-19T11:51:34.9884+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-19T11:51:34.9888+00:00] Validation successful for request fields policy-opa-pdp | WARN[2025-06-19T11:51:34.9888+00:00] Policy Name vehile does not exist policy-opa-pdp | DEBU[2025-06-19T11:51:34.9980+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-19T11:51:34.9980+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-19T11:51:34.9983+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-19T11:51:34.9983+00:00] SDK making a decision policy-opa-pdp | DEBU[2025-06-19T11:51:34.9992+00:00] RAW opa Decision output: policy-opa-pdp | {"decision_id":"0fcdb1c8-b57b-406a-98c6-f45c2c090a7b","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"031ba6bf-203e-465d-8788-838a4c13f9ca","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":920,"timer_rego_query_eval_ns":402370,"timer_sdk_decision_eval_ns":561563},"msg":"Decision Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-19T11:51:34Z","timestamp":"2025-06-19T11:51:34.998425719Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | { policy-opa-pdp | "ID": "0fcdb1c8-b57b-406a-98c6-f45c2c090a7b", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_granted": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "user_has_vehicle_access": [ policy-opa-pdp | { policy-opa-pdp | "status": "available", policy-opa-pdp | "type": "car" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-19T11:51:35.2645+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"e6228620-ae02-43e1-9414-79c82b8d1bfa","timestampMs":1750333895242,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-19T11:51:35.2646+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-19T11:51:35.2650+00:00] PDP_UPDATE Message received: {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"e6228620-ae02-43e1-9414-79c82b8d1bfa","timestampMs":1750333895242,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | INFO[2025-06-19T11:51:35.2651+00:00] Found Policies to be undeployed policy-opa-pdp | INFO[2025-06-19T11:51:35.2651+00:00] Extracted Policy Name: vehicle, Version: 1.0.6 for undeployment policy-opa-pdp | DEBU[2025-06-19T11:51:35.2654+00:00] Deleting Policy from OPA : /vehicle policy-opa-pdp | DEBU[2025-06-19T11:51:35.2693+00:00] Removing policy directory: /opt/policies/vehicle policy-opa-pdp | DEBU[2025-06-19T11:51:35.2698+00:00] Deleting data from OPA : /node/vehicle policy-opa-pdp | DEBU[2025-06-19T11:51:35.2698+00:00] Analyzing dataPath: /node/vehicle policy-opa-pdp | DEBU[2025-06-19T11:51:35.2698+00:00] 
Path segments: [ node vehicle] policy-opa-pdp | DEBU[2025-06-19T11:51:35.2698+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/vehicle policy-opa-pdp | DEBU[2025-06-19T11:51:35.2700+00:00] Removing data directory: /opt/data/node/vehicle policy-opa-pdp | INFO[2025-06-19T11:51:35.2704+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-19T11:51:35.2704+00:00] Policies Map After Undeployment : { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-19T11:51:35.2705+00:00] Processed policies_to_be_undeployed successfully policy-opa-pdp | INFO[2025-06-19T11:51:35.2708+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/19 11:51:35 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-19T11:51:35.2710+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"e6228620-ae02-43e1-9414-79c82b8d1bfa","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"1d54935a-e758-4037-bbea-57928eb5e187","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333895270","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-19T11:51:35.2711+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-19T11:51:35.2712+00:00] 0 policy-opa-pdp | DEBU[2025-06-19T11:51:35.2776+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"e6228620-ae02-43e1-9414-79c82b8d1bfa","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"1d54935a-e758-4037-bbea-57928eb5e187","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333895270","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-19T11:51:35.2776+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-19T11:51:35.2777+00:00] discarding event of type PDP_STATUS policy-opa-pdp | INFO[2025-06-19T11:51:35.6655+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-19T11:51:35.6655+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | WARN[2025-06-19T11:51:35.6655+00:00] Error in reading data under /node/vehicle path policy-opa-pdp | ERRO[2025-06-19T11:51:35.6656+00:00] Error in getting 
data - storage_not_found_error: /node/vehicle: document does not exist policy-opa-pdp | INFO[2025-06-19T11:51:35.6767+00:00] PDP received a request to update data through API policy-opa-pdp | DEBU[2025-06-19T11:51:35.6771+00:00] All fields are valid! policy-opa-pdp | INFO[2025-06-19T11:51:35.6771+00:00] data : [map[op:remove path:/round]] policy-opa-pdp | INFO[2025-06-19T11:51:35.6771+00:00] policy name : vehicle policy-opa-pdp | DEBU[2025-06-19T11:51:35.6771+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0]] policy-opa-pdp | ERRO[2025-06-19T11:51:35.6772+00:00] Policy associated with the patch request does not exists policy-opa-pdp | DEBU[2025-06-19T11:51:36.4236+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSI
sCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"b00d126f-96f6-4ad4-b9e2-48d584e0fb25","timestampMs":1750333896404,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-19T11:51:36.4239+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-19T11:51:36.4241+00:00] PDP_UPDATE Message received: 
{"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"b00d126f-96f6-4ad4-b9e2-48d584e0fb25","timestampMs":1750333896404,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-19T11:51:36.4241+00:00] Check if Policy is Already Deployed: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-19T11:51:36.4242+00:00] Policy is new and should be deployed: abac 1.0.7 policy-opa-pdp | DEBU[2025-06-19T11:51:36.4242+00:00] Policy Is Allowed: abac policy-opa-pdp | DEBU[2025-06-19T11:51:36.4242+00:00] Validating properties data for policy: abac policy-opa-pdp | DEBU[2025-06-19T11:51:36.4242+00:00] Validating properties policy for policy: abac policy-opa-pdp | INFO[2025-06-19T11:51:36.4243+00:00] Validation successful for policy: abac policy-opa-pdp | INFO[2025-06-19T11:51:36.4244+00:00] Directory created: /opt/policies/abac policy-opa-pdp | INFO[2025-06-19T11:51:36.4245+00:00] Policy file saved: /opt/policies/abac/policy.rego policy-opa-pdp | INFO[2025-06-19T11:51:36.4246+00:00] Directory created: /opt/data/node/abac policy-opa-pdp | INFO[2025-06-19T11:51:36.4246+00:00] Data file saved: /opt/data/node/abac/data.json policy-opa-pdp | DEBU[2025-06-19T11:51:36.4246+00:00] Before calling combinedoutput policy-opa-pdp | DEBU[2025-06-19T11:51:36.4421+00:00] Bundle Built Sucessfully.... 
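In the PDP_UPDATE above, both the "policy" and "data" properties are carried base64-encoded; the PDP decodes them into /opt/policies/abac/policy.rego and /opt/data/node/abac/data.json before building the bundle. A small sketch of the decode step, with the abac Rego module recovered from the payload itself shown in the comments:

    import base64

    def decode_policies(pdp_update: dict) -> dict:
        """Decode the base64 Rego modules carried in a PDP_UPDATE message."""
        modules = {}
        for entry in pdp_update.get("policiesToBeDeployed", []):
            for name, b64 in entry["properties"]["policy"].items():
                modules[name] = base64.b64decode(b64).decode()
        return modules

    # Applied to the message above, decode_policies(update)["abac"] yields:
    #
    #   package abac
    #
    #   import rego.v1
    #
    #   default allow := false
    #
    #   allow if {
    #    viewable_sensor_data
    #    action_is_read
    #   }
    #
    #   action_is_read if "read" in input.actions
    #
    #   viewable_sensor_data contains view_data if {
    #    some sensor_data in data.node.abac.sensor_data
    #    sensor_data.timestamp >= input.time_period.from
    #    sensor_data.timestamp < input.time_period.to
    #
    #    view_data := {datatype: sensor_data[datatype] | datatype in input.datatypes}
    #   }

These three rules line up with the action_is_read, allow, and viewable_sensor_data fields in the decision results logged further below.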
policy-opa-pdp | DEBU[2025-06-19T11:51:36.4462+00:00] storage not found creating : /node/abac policy-opa-pdp | INFO[2025-06-19T11:51:36.4464+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.abac" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "abac" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "abac", policy-opa-pdp | "policy-version": "1.0.7" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-19T11:51:36.4464+00:00] Loaded Policy: abac policy-opa-pdp | INFO[2025-06-19T11:51:36.4464+00:00] Processed policies_to_be_deployed successfully policy-opa-pdp | INFO[2025-06-19T11:51:36.4465+00:00] Sending PDP Status With Update Response policy-opa-pdp | DEBU[2025-06-19T11:51:36.4465+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | 2025/06/19 11:51:36 KafkaProducer or producer produce message policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"b00d126f-96f6-4ad4-b9e2-48d584e0fb25","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"37cbf9e1-553c-446f-ab3b-d624583c2c29","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333896446","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-19T11:51:36.4465+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-19T11:51:36.4466+00:00] 0 policy-opa-pdp | DEBU[2025-06-19T11:51:36.4542+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"b00d126f-96f6-4ad4-b9e2-48d584e0fb25","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"37cbf9e1-553c-446f-ab3b-d624583c2c29","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333896446","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-19T11:51:36.4542+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-19T11:51:36.4543+00:00] discarding event of type PDP_STATUS policy-opa-pdp | INFO[2025-06-19T11:52:00.5012+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-19T11:52:00.5015+00:00] datapath to get Data : /node/abac policy-opa-pdp | DEBU[2025-06-19T11:52:00.5017+00:00] Json Data at /node/abac: {"sensor_data":[{"humidity":"40%","id":"0001","location":"Sri Lanka","particle_density":"1.3 g/l","precipitation":"1000 mm","temperature":"28 C","timestamp":"2024-02-26","windspeed":"5.5 m/s"},{"humidity":"45%","id":"0002","location":"Colombo","particle_density":"1.5 g/l","precipitation":"1200 mm","temperature":"30 
C","timestamp":"2024-02-26","windspeed":"6.0 m/s"},{"humidity":"60%","id":"0003","location":"Kandy","particle_density":"1.1 g/l","precipitation":"800 mm","temperature":"25 C","timestamp":"2024-02-26","windspeed":"4.5 m/s"},{"humidity":"30%","id":"0004","location":"Galle","particle_density":"1.8 g/l","precipitation":"500 mm","temperature":"35 C","timestamp":"2024-02-27","windspeed":"7.2 m/s"},{"humidity":"20%","id":"0005","location":"Jaffna","particle_density":"0.9 g/l","precipitation":"300 mm","temperature":"-5 C","timestamp":"2024-02-27","windspeed":"3.8 m/s"},{"humidity":"55%","id":"0006","location":"Trincomalee","particle_density":"1.2 g/l","precipitation":"1000 mm","temperature":"20 C","timestamp":"2024-02-28","windspeed":"5.0 m/s"},{"humidity":"50%","id":"0007","location":"Nuwara Eliya","particle_density":"1.3 g/l","precipitation":"600 mm","temperature":"25 C","timestamp":"2024-02-28","windspeed":"4.0 m/s"},{"humidity":"40%","id":"0008","location":"Anuradhapura","particle_density":"1.4 g/l","precipitation":"700 mm","temperature":"28 C","timestamp":"2024-02-29","windspeed":"5.8 m/s"},{"humidity":"65%","id":"0009","location":"Matara","particle_density":"1.6 g/l","precipitation":"900 mm","temperature":"32 C","timestamp":"2024-02-29","windspeed":"6.5 m/s"}]} policy-opa-pdp | DEBU[2025-06-19T11:52:00.5131+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-19T11:52:00.5132+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-19T11:52:00.5137+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-19T11:52:00.5139+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"23e092a6-53da-44cc-ae72-cc53ecfee4f7","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"031ba6bf-203e-465d-8788-838a4c13f9ca","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":700,"timer_rego_query_compile_ns":152184,"timer_rego_query_eval_ns":782678,"timer_rego_query_parse_ns":117182,"timer_sdk_decision_eval_ns":1391123},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-19T11:52:00Z","timestamp":"2025-06-19T11:52:00.51416823Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-19T11:52:00.5162+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "23e092a6-53da-44cc-ae72-cc53ecfee4f7", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_read": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "viewable_sensor_data": [ policy-opa-pdp | { policy-opa-pdp | "location": "Galle", policy-opa-pdp | "precipitation": "500 mm", policy-opa-pdp | "temperature": "35 C", policy-opa-pdp | "windspeed": "7.2 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Jaffna", policy-opa-pdp | "precipitation": "300 mm", policy-opa-pdp | "temperature": "-5 C", policy-opa-pdp | "windspeed": "3.8 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Nuwara Eliya", policy-opa-pdp | 
"precipitation": "600 mm", policy-opa-pdp | "temperature": "25 C", policy-opa-pdp | "windspeed": "4.0 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Trincomalee", policy-opa-pdp | "precipitation": "1000 mm", policy-opa-pdp | "temperature": "20 C", policy-opa-pdp | "windspeed": "5.0 m/s" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-19T11:52:00.5232+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-19T11:52:00.5233+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-19T11:52:00.5236+00:00] Validation successful for request fields policy-opa-pdp | WARN[2025-06-19T11:52:00.5237+00:00] Policy Name abc does not exist policy-opa-pdp | DEBU[2025-06-19T11:52:00.5299+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-19T11:52:00.5302+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-19T11:52:00.5307+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-19T11:52:00.5309+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"26a84872-c70b-4a33-9035-70e949b66281","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"031ba6bf-203e-465d-8788-838a4c13f9ca","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":1080,"timer_rego_query_eval_ns":1023194,"timer_sdk_decision_eval_ns":1261820},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-19T11:52:00Z","timestamp":"2025-06-19T11:52:00.531114952Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-19T11:52:00.5330+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "26a84872-c70b-4a33-9035-70e949b66281", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_read": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "viewable_sensor_data": [ policy-opa-pdp | { policy-opa-pdp | "location": "Galle", policy-opa-pdp | "precipitation": "500 mm", policy-opa-pdp | "temperature": "35 C", policy-opa-pdp | "windspeed": "7.2 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Jaffna", policy-opa-pdp | "precipitation": "300 mm", policy-opa-pdp | "temperature": "-5 C", policy-opa-pdp | "windspeed": "3.8 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Nuwara Eliya", policy-opa-pdp | "precipitation": "600 mm", policy-opa-pdp | "temperature": "25 C", policy-opa-pdp | "windspeed": "4.0 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Trincomalee", policy-opa-pdp | "precipitation": "1000 mm", policy-opa-pdp | "temperature": "20 C", policy-opa-pdp | "windspeed": "5.0 m/s" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | 
"Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-19T11:52:01.1128+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"54a9fb81-6a73-441e-9488-4b382e00919f","timestampMs":1750333921087,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-19T11:52:01.1130+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-19T11:52:01.1131+00:00] PDP_UPDATE Message received: {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"54a9fb81-6a73-441e-9488-4b382e00919f","timestampMs":1750333921087,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | INFO[2025-06-19T11:52:01.1132+00:00] Found Policies to be undeployed policy-opa-pdp | INFO[2025-06-19T11:52:01.1132+00:00] Extracted Policy Name: abac, Version: 1.0.7 for undeployment policy-opa-pdp | DEBU[2025-06-19T11:52:01.1132+00:00] Deleting Policy from OPA : /abac policy-opa-pdp | DEBU[2025-06-19T11:52:01.1160+00:00] Removing policy directory: /opt/policies/abac policy-opa-pdp | DEBU[2025-06-19T11:52:01.1163+00:00] Deleting data from OPA : /node/abac policy-opa-pdp | DEBU[2025-06-19T11:52:01.1163+00:00] Analyzing dataPath: /node/abac policy-opa-pdp | DEBU[2025-06-19T11:52:01.1163+00:00] Path segments: [ node abac] policy-opa-pdp | DEBU[2025-06-19T11:52:01.1163+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/abac policy-opa-pdp | DEBU[2025-06-19T11:52:01.1163+00:00] Removing data directory: /opt/data/node/abac policy-opa-pdp | INFO[2025-06-19T11:52:01.1165+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-19T11:52:01.1165+00:00] Policies Map After Undeployment : { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-19T11:52:01.1166+00:00] Processed policies_to_be_undeployed successfully policy-opa-pdp | INFO[2025-06-19T11:52:01.1168+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/19 11:52:01 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-19T11:52:01.1169+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp 
Update","response":{"responseTo":"54a9fb81-6a73-441e-9488-4b382e00919f","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"da3391ca-f009-4d32-ad1f-47d89f7748b0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333921116","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-19T11:52:01.1169+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-19T11:52:01.1169+00:00] 0 policy-opa-pdp | DEBU[2025-06-19T11:52:01.1278+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"54a9fb81-6a73-441e-9488-4b382e00919f","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"da3391ca-f009-4d32-ad1f-47d89f7748b0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333921116","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-19T11:52:01.1279+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-19T11:52:01.1279+00:00] discarding event of type PDP_STATUS policy-pap | Waiting for api port 6969... policy-pap | api (172.17.0.8:6969) open policy-pap | Waiting for kafka port 9092... policy-pap | kafka (172.17.0.5:9092) open policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-pap | policy-pap | . ____ _ __ _ _ policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / policy-pap | =========|_|==============|___/=/_/_/_/ policy-pap | policy-pap | :: Spring Boot :: (v3.4.6) policy-pap | policy-pap | [2025-06-19T11:47:25.127+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 63 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-pap | [2025-06-19T11:47:25.128+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default" policy-pap | [2025-06-19T11:47:26.670+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-pap | [2025-06-19T11:47:26.764+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 81 ms. Found 7 JPA repository interfaces. 
policy-pap | [2025-06-19T11:47:27.770+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) policy-pap | [2025-06-19T11:47:27.783+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-19T11:47:27.785+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2025-06-19T11:47:27.785+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] policy-pap | [2025-06-19T11:47:27.851+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2025-06-19T11:47:27.851+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2661 ms policy-pap | [2025-06-19T11:47:28.399+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2025-06-19T11:47:28.505+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final policy-pap | [2025-06-19T11:47:28.562+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-pap | [2025-06-19T11:47:29.027+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-pap | [2025-06-19T11:47:29.075+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2025-06-19T11:47:29.339+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@1d6a22dd policy-pap | [2025-06-19T11:47:29.342+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-pap | [2025-06-19T11:47:29.459+00:00|INFO|pooling|main] HHH10001005: Database info: policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] policy-pap | Database driver: undefined/unknown policy-pap | Database version: 16.4 policy-pap | Autocommit mode: undefined/unknown policy-pap | Isolation level: undefined/unknown policy-pap | Minimum pool size: undefined/unknown policy-pap | Maximum pool size: undefined/unknown policy-pap | [2025-06-19T11:47:31.507+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-pap | [2025-06-19T11:47:31.511+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2025-06-19T11:47:32.826+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 41b86375-7cd4-4a13-9e12-1ee5878a07d0 policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | 
internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | 
ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-19T11:47:32.890+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-19T11:47:33.044+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-19T11:47:33.045+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-19T11:47:33.045+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750333653043 policy-pap | [2025-06-19T11:47:33.047+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-1, groupId=41b86375-7cd4-4a13-9e12-1ee5878a07d0] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-19T11:47:33.048+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 
policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-19T11:47:33.048+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-19T11:47:33.056+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-19T11:47:33.056+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-19T11:47:33.056+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750333653056 policy-pap | [2025-06-19T11:47:33.057+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-19T11:47:33.439+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=opaGroup, description=null, pdpGroupState=ACTIVE, properties={}, pdpSubgroups=[PdpSubGroup(pdpType=opa, supportedPolicyTypes=[onap.policies.native.opa 1.0.0], policies=[slice.capacity.check 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties={}, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2025-06-19T11:47:33.573+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2025-06-19T11:47:33.656+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager policy-pap | [2025-06-19T11:47:33.924+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. policy-pap | [2025-06-19T11:47:34.712+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' policy-pap | [2025-06-19T11:47:34.830+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-19T11:47:34.852+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' policy-pap | [2025-06-19T11:47:34.875+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2025-06-19T11:47:34.875+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2025-06-19T11:47:34.876+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2025-06-19T11:47:34.876+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2025-06-19T11:47:34.876+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2025-06-19T11:47:34.877+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2025-06-19T11:47:34.877+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2025-06-19T11:47:34.879+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=41b86375-7cd4-4a13-9e12-1ee5878a07d0, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@6e32eea5 policy-pap | [2025-06-19T11:47:34.891+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=41b86375-7cd4-4a13-9e12-1ee5878a07d0, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-19T11:47:34.891+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true 
policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 41b86375-7cd4-4a13-9e12-1ee5878a07d0 policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 
policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-19T11:47:34.892+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-19T11:47:34.899+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-19T11:47:34.899+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-19T11:47:34.899+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750333654899 policy-pap | [2025-06-19T11:47:34.899+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3, groupId=41b86375-7cd4-4a13-9e12-1ee5878a07d0] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-19T11:47:34.900+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2025-06-19T11:47:34.900+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=3849191f-dfa1-4da4-88bd-341769da56cf, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@288c16a5 policy-pap | [2025-06-19T11:47:34.900+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=3849191f-dfa1-4da4-88bd-341769da56cf, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-19T11:47:34.900+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | 
client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | 
ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-19T11:47:34.900+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-19T11:47:34.906+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-19T11:47:34.906+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-19T11:47:34.906+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750333654906 policy-pap | [2025-06-19T11:47:34.906+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-19T11:47:34.907+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2025-06-19T11:47:34.907+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=3849191f-dfa1-4da4-88bd-341769da56cf, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-19T11:47:34.907+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=41b86375-7cd4-4a13-9e12-1ee5878a07d0, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-19T11:47:34.907+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0afbbdab-f2ce-41eb-92e9-4aa5bcc5ab1f, alive=false, publisher=null]]: starting policy-pap | [2025-06-19T11:47:34.920+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] 
policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509
policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-19T11:47:34.921+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-19T11:47:34.934+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-pap | [2025-06-19T11:47:34.951+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-19T11:47:34.951+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-19T11:47:34.951+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750333654951 policy-pap | [2025-06-19T11:47:34.952+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0afbbdab-f2ce-41eb-92e9-4aa5bcc5ab1f, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-19T11:47:34.952+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4a568d6a-53d3-45de-9bd5-3c4b74689c8f, alive=false, publisher=null]]: starting policy-pap | [2025-06-19T11:47:34.953+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap |
retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-19T11:47:34.963+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-19T11:47:34.964+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
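For reference, the two ProducerConfig dumps above show PAP bringing up its two idempotent publishers against kafka:9092 (producer-1 serving the policy-pdp-pap sink, producer-2 later seen publishing to policy-notification), with acks = -1 and String serializers. Below is a minimal Java sketch of an equivalent producer built from those logged values only; it is illustrative and is not the actual ONAP policy-common publisher code, and the message body is a placeholder rather than a full PDP_UPDATE.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PapPublisherSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values mirror the ProducerConfig dump logged above.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);   // enable.idempotence = true
        props.put(ProducerConfig.ACKS_CONFIG, "all");                // acks = -1 is equivalent to "all"
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // PAP publishes PDP_UPDATE / PDP_STATE_CHANGE JSON on this topic;
            // the payload here is a placeholder, not a real PDP_UPDATE body.
            producer.send(new ProducerRecord<>("policy-pdp-pap",
                    "{\"messageName\":\"PDP_UPDATE\"}"));
        }
    }
}

The acks = -1 / enable.idempotence = true pairing in the dump means each control message is acknowledged by all in-sync replicas and retried (retries = 2147483647) without duplication, which is why the log later shows each producer being assigned a ProducerId and epoch.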
policy-pap | [2025-06-19T11:47:34.969+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-19T11:47:34.969+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-19T11:47:34.969+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750333654969 policy-pap | [2025-06-19T11:47:34.969+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4a568d6a-53d3-45de-9bd5-3c4b74689c8f, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-19T11:47:34.969+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2025-06-19T11:47:34.969+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2025-06-19T11:47:34.971+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2025-06-19T11:47:34.973+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2025-06-19T11:47:34.974+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2025-06-19T11:47:34.975+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2025-06-19T11:47:34.975+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2025-06-19T11:47:34.975+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2025-06-19T11:47:34.976+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-pap | [2025-06-19T11:47:34.977+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2025-06-19T11:47:34.980+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2025-06-19T11:47:34.980+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.705 seconds (process running for 11.316) policy-pap | [2025-06-19T11:47:35.406+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: Oxu_XS5tS_KuYOmXHLxK8w policy-pap | [2025-06-19T11:47:35.407+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: Oxu_XS5tS_KuYOmXHLxK8w policy-pap | [2025-06-19T11:47:35.412+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3, groupId=41b86375-7cd4-4a13-9e12-1ee5878a07d0] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-19T11:47:35.413+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3, groupId=41b86375-7cd4-4a13-9e12-1ee5878a07d0] Cluster ID: Oxu_XS5tS_KuYOmXHLxK8w policy-pap | [2025-06-19T11:47:35.449+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 policy-pap | [2025-06-19T11:47:35.449+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 policy-pap | [2025-06-19T11:47:35.465+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | 
[2025-06-19T11:47:35.466+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: Oxu_XS5tS_KuYOmXHLxK8w policy-pap | [2025-06-19T11:47:35.583+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3, groupId=41b86375-7cd4-4a13-9e12-1ee5878a07d0] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-19T11:47:35.601+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-19T11:47:35.815+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3, groupId=41b86375-7cd4-4a13-9e12-1ee5878a07d0] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-19T11:47:35.852+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-19T11:47:36.262+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3, groupId=41b86375-7cd4-4a13-9e12-1ee5878a07d0] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-19T11:47:36.301+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-19T11:47:37.144+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-19T11:47:37.152+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-19T11:47:37.154+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3, groupId=41b86375-7cd4-4a13-9e12-1ee5878a07d0] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-19T11:47:37.157+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3, groupId=41b86375-7cd4-4a13-9e12-1ee5878a07d0] (Re-)joining group policy-pap | [2025-06-19T11:47:37.196+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3, groupId=41b86375-7cd4-4a13-9e12-1ee5878a07d0] Request joining group due to: need to re-join with the given member-id: consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3-bf1fcdf0-1a79-4f3f-a2f6-b65cb06f1c03 policy-pap | [2025-06-19T11:47:37.196+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3, groupId=41b86375-7cd4-4a13-9e12-1ee5878a07d0] 
(Re-)joining group policy-pap | [2025-06-19T11:47:37.197+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-2a2ebe49-d7e4-4021-8416-81ace17e9a39 policy-pap | [2025-06-19T11:47:37.198+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-19T11:47:40.223+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3, groupId=41b86375-7cd4-4a13-9e12-1ee5878a07d0] Successfully joined group with generation Generation{generationId=1, memberId='consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3-bf1fcdf0-1a79-4f3f-a2f6-b65cb06f1c03', protocol='range'} policy-pap | [2025-06-19T11:47:40.226+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-2a2ebe49-d7e4-4021-8416-81ace17e9a39', protocol='range'} policy-pap | [2025-06-19T11:47:40.230+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3, groupId=41b86375-7cd4-4a13-9e12-1ee5878a07d0] Finished assignment for group at generation 1: {consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3-bf1fcdf0-1a79-4f3f-a2f6-b65cb06f1c03=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-19T11:47:40.230+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-2a2ebe49-d7e4-4021-8416-81ace17e9a39=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-19T11:47:40.295+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3, groupId=41b86375-7cd4-4a13-9e12-1ee5878a07d0] Successfully synced group in generation Generation{generationId=1, memberId='consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3-bf1fcdf0-1a79-4f3f-a2f6-b65cb06f1c03', protocol='range'} policy-pap | [2025-06-19T11:47:40.296+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3, groupId=41b86375-7cd4-4a13-9e12-1ee5878a07d0] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-19T11:47:40.296+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-2a2ebe49-d7e4-4021-8416-81ace17e9a39', protocol='range'} policy-pap | [2025-06-19T11:47:40.296+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-19T11:47:40.298+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3, groupId=41b86375-7cd4-4a13-9e12-1ee5878a07d0] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-19T11:47:40.299+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer 
clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-19T11:47:40.317+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-19T11:47:40.318+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3, groupId=41b86375-7cd4-4a13-9e12-1ee5878a07d0] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-19T11:47:40.338+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2025-06-19T11:47:40.338+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-41b86375-7cd4-4a13-9e12-1ee5878a07d0-3, groupId=41b86375-7cd4-4a13-9e12-1ee5878a07d0] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2025-06-19T11:47:41.615+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2025-06-19T11:47:41.615+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-pap | [2025-06-19T11:47:41.617+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms policy-pap | [2025-06-19T11:49:30.339+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: policy-pap | [] policy-pap | [2025-06-19T11:49:30.340+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"fa6b9d54-3965-426e-ae00-2b7f586b69b8","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750333770286","deploymentInstanceInfo":""} policy-pap | [2025-06-19T11:49:30.340+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"fa6b9d54-3965-426e-ae00-2b7f586b69b8","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750333770286","deploymentInstanceInfo":""} policy-pap | [2025-06-19T11:49:30.349+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-19T11:49:30.972+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting policy-pap | [2025-06-19T11:49:30.972+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting listener policy-pap | [2025-06-19T11:49:30.973+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting timer policy-pap | 
[2025-06-19T11:49:30.973+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=96472ba6-f998-42de-881b-ca4c9cd1d966, expireMs=1750333800973] policy-pap | [2025-06-19T11:49:30.976+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting enqueue policy-pap | [2025-06-19T11:49:30.976+00:00|INFO|TimerManager|Thread-9] update timer waiting 29997ms Timer [name=96472ba6-f998-42de-881b-ca4c9cd1d966, expireMs=1750333800973] policy-pap | [2025-06-19T11:49:30.977+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate started policy-pap | [2025-06-19T11:49:30.982+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"96472ba6-f998-42de-881b-ca4c9cd1d966","timestampMs":1750333770945,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-19T11:49:31.041+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | 
{"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"96472ba6-f998-42de-881b-ca4c9cd1d966","timestampMs":1750333770945,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-19T11:49:31.042+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"96472ba6-f998-42de-881b-ca4c9cd1d966","timestampMs":1750333770945,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 
policy-pap | [2025-06-19T11:49:31.042+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-19T11:49:31.043+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-19T11:49:31.082+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"96472ba6-f998-42de-881b-ca4c9cd1d966","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"1b087bd7-7d5e-4918-ad6a-6a6b5c9e57be","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333771067","deploymentInstanceInfo":""} policy-pap | [2025-06-19T11:49:31.083+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"96472ba6-f998-42de-881b-ca4c9cd1d966","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"1b087bd7-7d5e-4918-ad6a-6a6b5c9e57be","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333771067","deploymentInstanceInfo":""} policy-pap | [2025-06-19T11:49:31.083+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 96472ba6-f998-42de-881b-ca4c9cd1d966 policy-pap | [2025-06-19T11:49:31.084+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping policy-pap | [2025-06-19T11:49:31.085+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping enqueue policy-pap | [2025-06-19T11:49:31.085+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping timer policy-pap | [2025-06-19T11:49:31.085+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=96472ba6-f998-42de-881b-ca4c9cd1d966, expireMs=1750333800973] policy-pap | [2025-06-19T11:49:31.085+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping listener policy-pap | [2025-06-19T11:49:31.085+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopped policy-pap | [2025-06-19T11:49:31.104+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate successful policy-pap | [2025-06-19T11:49:31.105+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"slice.capacity.check","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-19T11:49:31.105+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 start publishing next request policy-pap | 
[2025-06-19T11:49:31.105+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpStateChange starting
policy-pap | [2025-06-19T11:49:31.105+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpStateChange starting listener
policy-pap | [2025-06-19T11:49:31.105+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpStateChange starting timer
policy-pap | [2025-06-19T11:49:31.106+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=3e434624-a6b8-448c-a2fa-0b7bb0ae0412, expireMs=1750333801106]
policy-pap | [2025-06-19T11:49:31.106+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpStateChange starting enqueue
policy-pap | [2025-06-19T11:49:31.106+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpStateChange started
policy-pap | [2025-06-19T11:49:31.106+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=3e434624-a6b8-448c-a2fa-0b7bb0ae0412, expireMs=1750333801106]
policy-pap | [2025-06-19T11:49:31.107+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"3e434624-a6b8-448c-a2fa-0b7bb0ae0412","timestampMs":1750333770945,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
policy-pap | [2025-06-19T11:49:31.125+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"3e434624-a6b8-448c-a2fa-0b7bb0ae0412","timestampMs":1750333770945,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
policy-pap | [2025-06-19T11:49:31.125+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
policy-pap | [2025-06-19T11:49:31.130+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-notification=LEADER_NOT_AVAILABLE}
policy-pap | [2025-06-19T11:49:31.133+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"3e434624-a6b8-448c-a2fa-0b7bb0ae0412","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"6b2de463-ec0a-476c-a188-1c4336f6425c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333771117","deploymentInstanceInfo":""}
policy-pap | [2025-06-19T11:49:31.133+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 3e434624-a6b8-448c-a2fa-0b7bb0ae0412
policy-pap | [2025-06-19T11:49:31.488+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"3e434624-a6b8-448c-a2fa-0b7bb0ae0412","timestampMs":1750333770945,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
policy-pap | [2025-06-19T11:49:31.488+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
policy-pap | [2025-06-19T11:49:31.492+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"3e434624-a6b8-448c-a2fa-0b7bb0ae0412","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"6b2de463-ec0a-476c-a188-1c4336f6425c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333771117","deploymentInstanceInfo":""}
policy-pap | [2025-06-19T11:49:31.493+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpStateChange stopping
policy-pap | [2025-06-19T11:49:31.493+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpStateChange stopping enqueue
policy-pap | [2025-06-19T11:49:31.493+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpStateChange stopping timer
policy-pap | [2025-06-19T11:49:31.494+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=3e434624-a6b8-448c-a2fa-0b7bb0ae0412, expireMs=1750333801106]
policy-pap | [2025-06-19T11:49:31.494+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpStateChange stopping listener
policy-pap | [2025-06-19T11:49:31.494+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpStateChange stopped
policy-pap | [2025-06-19T11:49:31.494+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpStateChange successful
policy-pap | [2025-06-19T11:49:31.494+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 start publishing next request
policy-pap | [2025-06-19T11:49:31.495+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting
policy-pap | [2025-06-19T11:49:31.495+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting listener
policy-pap | [2025-06-19T11:49:31.495+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting timer
policy-pap | [2025-06-19T11:49:31.495+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=7e112d66-63c2-4a2d-8814-d43f03c87d9a, expireMs=1750333801495]
policy-pap | [2025-06-19T11:49:31.495+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting enqueue
policy-pap | [2025-06-19T11:49:31.495+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate started
policy-pap | [2025-06-19T11:49:31.496+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7e112d66-63c2-4a2d-8814-d43f03c87d9a","timestampMs":1750333771479,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
policy-pap | [2025-06-19T11:49:31.503+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7e112d66-63c2-4a2d-8814-d43f03c87d9a","timestampMs":1750333771479,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
policy-pap | [2025-06-19T11:49:31.503+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-pap | [2025-06-19T11:49:31.504+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7e112d66-63c2-4a2d-8814-d43f03c87d9a","timestampMs":1750333771479,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
policy-pap | [2025-06-19T11:49:31.504+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-pap | [2025-06-19T11:49:31.511+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"7e112d66-63c2-4a2d-8814-d43f03c87d9a","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"8bfed5cc-d247-4eaa-bdf2-33c4e79a0b14","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333771501","deploymentInstanceInfo":""}
policy-pap | [2025-06-19T11:49:31.512+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 7e112d66-63c2-4a2d-8814-d43f03c87d9a
policy-pap | [2025-06-19T11:49:31.513+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"7e112d66-63c2-4a2d-8814-d43f03c87d9a","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"8bfed5cc-d247-4eaa-bdf2-33c4e79a0b14","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333771501","deploymentInstanceInfo":""}
policy-pap | [2025-06-19T11:49:31.513+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping
policy-pap | [2025-06-19T11:49:31.513+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping enqueue
policy-pap | [2025-06-19T11:49:31.514+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping timer
policy-pap | [2025-06-19T11:49:31.514+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=7e112d66-63c2-4a2d-8814-d43f03c87d9a, expireMs=1750333801495]
policy-pap | [2025-06-19T11:49:31.514+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping listener
policy-pap | [2025-06-19T11:49:31.514+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopped
policy-pap | [2025-06-19T11:49:31.521+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate successful
policy-pap | [2025-06-19T11:49:31.521+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 has no more requests
policy-pap | [2025-06-19T11:49:34.978+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms
policy-pap | [2025-06-19T11:50:00.974+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=96472ba6-f998-42de-881b-ca4c9cd1d966, expireMs=1750333800973]
policy-pap | [2025-06-19T11:50:01.107+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=3e434624-a6b8-448c-a2fa-0b7bb0ae0412, expireMs=1750333801106]
policy-pap | [2025-06-19T11:50:30.301+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"dd72965c-32f7-4d29-ba7c-196526049050","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333830288","deploymentInstanceInfo":""}
policy-pap | [2025-06-19T11:50:30.301+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
policy-pap | [2025-06-19T11:50:30.303+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"dd72965c-32f7-4d29-ba7c-196526049050","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333830288","deploymentInstanceInfo":""}
policy-pap | [2025-06-19T11:50:44.940+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group opaGroup
policy-pap | [2025-06-19T11:50:44.941+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-8] add policy zoneB 1.0.6 to subgroup opaGroup opa count=2
policy-pap | [2025-06-19T11:50:44.942+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] Registering a deploy for policy zoneB 1.0.6
policy-pap | [2025-06-19T11:50:44.942+00:00|INFO|SessionData|http-nio-6969-exec-8] add update opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 opaGroup opa policies=1
policy-pap | [2025-06-19T11:50:44.943+00:00|INFO|SessionData|http-nio-6969-exec-8] update cached group opaGroup
policy-pap | [2025-06-19T11:50:44.944+00:00|INFO|SessionData|http-nio-6969-exec-8] updating DB group opaGroup
policy-pap | [2025-06-19T11:50:44.959+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-8] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=DEPLOYMENT, timestamp=2025-06-19T11:50:44Z, user=policyadmin)]
policy-pap | [2025-06-19T11:50:44.987+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting
policy-pap | [2025-06-19T11:50:44.987+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting listener
policy-pap | [2025-06-19T11:50:44.987+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting timer
policy-pap | [2025-06-19T11:50:44.988+00:00|INFO|TimerManager|http-nio-6969-exec-8] update timer registered Timer [name=ab4b0501-e50d-4de9-a593-4d933a804efd, expireMs=1750333874988]
policy-pap | [2025-06-19T11:50:44.988+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting enqueue
policy-pap | [2025-06-19T11:50:44.988+00:00|INFO|ServiceManager|http-nio-6969-exec-8] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate started
policy-pap | [2025-06-19T11:50:44.988+00:00|INFO|TimerManager|Thread-9] update timer waiting 30000ms Timer [name=ab4b0501-e50d-4de9-a593-4d933a804efd, expireMs=1750333874988]
policy-pap | [2025-06-19T11:50:44.988+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ab4b0501-e50d-4de9-a593-4d933a804efd","timestampMs":1750333844942,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
policy-pap | [2025-06-19T11:50:44.996+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ab4b0501-e50d-4de9-a593-4d933a804efd","timestampMs":1750333844942,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
policy-pap | [2025-06-19T11:50:44.996+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-pap | [2025-06-19T11:50:44.999+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ab4b0501-e50d-4de9-a593-4d933a804efd","timestampMs":1750333844942,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
policy-pap | [2025-06-19T11:50:45.000+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-pap | [2025-06-19T11:50:45.033+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ab4b0501-e50d-4de9-a593-4d933a804efd","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"78a2516e-5213-461d-89ff-6d07ea7af794","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333845021","deploymentInstanceInfo":""}
policy-pap | [2025-06-19T11:50:45.034+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id ab4b0501-e50d-4de9-a593-4d933a804efd
policy-pap | [2025-06-19T11:50:45.035+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ab4b0501-e50d-4de9-a593-4d933a804efd","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"78a2516e-5213-461d-89ff-6d07ea7af794","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333845021","deploymentInstanceInfo":""}
policy-pap | [2025-06-19T11:50:45.036+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping
policy-pap | [2025-06-19T11:50:45.036+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping enqueue
policy-pap | [2025-06-19T11:50:45.036+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping timer
policy-pap | [2025-06-19T11:50:45.036+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=ab4b0501-e50d-4de9-a593-4d933a804efd, expireMs=1750333874988]
policy-pap | [2025-06-19T11:50:45.036+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping listener
policy-pap | [2025-06-19T11:50:45.036+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopped
policy-pap | [2025-06-19T11:50:45.044+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate successful
policy-pap | [2025-06-19T11:50:45.044+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 has no more requests
policy-pap | [2025-06-19T11:50:45.046+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification]
policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]}
policy-pap | [2025-06-19T11:51:09.576+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group opaGroup
policy-pap | [2025-06-19T11:51:09.578+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-9] remove policy zoneB 1.0.6 from subgroup opaGroup opa count=1
policy-pap | [2025-06-19T11:51:09.578+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering an undeploy for policy zoneB 1.0.6
policy-pap | [2025-06-19T11:51:09.578+00:00|INFO|SessionData|http-nio-6969-exec-9] add update opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 opaGroup opa policies=0
policy-pap | [2025-06-19T11:51:09.578+00:00|INFO|SessionData|http-nio-6969-exec-9] update cached group opaGroup
policy-pap | [2025-06-19T11:51:09.579+00:00|INFO|SessionData|http-nio-6969-exec-9] updating DB group opaGroup
policy-pap | [2025-06-19T11:51:09.593+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-19T11:51:09Z, user=policyadmin)]
policy-pap | [2025-06-19T11:51:09.609+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting
policy-pap | [2025-06-19T11:51:09.609+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting listener
policy-pap | [2025-06-19T11:51:09.609+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting timer
policy-pap | [2025-06-19T11:51:09.609+00:00|INFO|TimerManager|http-nio-6969-exec-9] update timer registered Timer [name=d42f0693-33b4-41b4-baa9-6726c1b6d141, expireMs=1750333899609]
policy-pap | [2025-06-19T11:51:09.609+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting enqueue
policy-pap | [2025-06-19T11:51:09.609+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate started
policy-pap | [2025-06-19T11:51:09.609+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"d42f0693-33b4-41b4-baa9-6726c1b6d141","timestampMs":1750333869578,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
policy-pap | [2025-06-19T11:51:09.618+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"d42f0693-33b4-41b4-baa9-6726c1b6d141","timestampMs":1750333869578,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
policy-pap | [2025-06-19T11:51:09.618+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-pap | [2025-06-19T11:51:09.619+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"d42f0693-33b4-41b4-baa9-6726c1b6d141","timestampMs":1750333869578,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
policy-pap | [2025-06-19T11:51:09.619+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-pap | [2025-06-19T11:51:09.630+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"d42f0693-33b4-41b4-baa9-6726c1b6d141","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"189be6fa-9530-46cd-869d-a0a2352a062f","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333869620","deploymentInstanceInfo":""}
policy-pap | [2025-06-19T11:51:09.631+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id d42f0693-33b4-41b4-baa9-6726c1b6d141
policy-pap | [2025-06-19T11:51:09.633+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"d42f0693-33b4-41b4-baa9-6726c1b6d141","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"189be6fa-9530-46cd-869d-a0a2352a062f","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333869620","deploymentInstanceInfo":""}
policy-pap | [2025-06-19T11:51:09.634+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping
policy-pap | [2025-06-19T11:51:09.634+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping enqueue
policy-pap | [2025-06-19T11:51:09.634+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping timer
policy-pap | [2025-06-19T11:51:09.634+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=d42f0693-33b4-41b4-baa9-6726c1b6d141, expireMs=1750333899609]
policy-pap | [2025-06-19T11:51:09.635+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping listener
policy-pap | [2025-06-19T11:51:09.635+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopped
policy-pap | [2025-06-19T11:51:09.657+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate successful
policy-pap | [2025-06-19T11:51:09.657+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 has no more requests
policy-pap | [2025-06-19T11:51:09.657+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification]
policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]}
policy-pap | [2025-06-19T11:51:10.051+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group opaGroup
policy-pap | [2025-06-19T11:51:10.054+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-10] failed to undeploy policy: zoneB null
policy-pap | [2025-06-19T11:51:10.054+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-10] undeploy policy failed
policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: zoneB null
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108)
policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy()
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy()
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258)
policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191)
policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118)
policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986)
policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891)
policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089)
policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979)
policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014)
policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936)
policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659)
policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885)
policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108)
policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128)
policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126)
policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107)
policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90)
policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82)
policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224)
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137)
policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233)
policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191)
policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113)
policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243)
policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113)
policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74)
policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238)
policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362)
policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167)
policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90)
policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483)
policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116)
policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93)
policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344)
policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398)
policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63)
policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903)
policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740)
policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189)
policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658)
policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63)
policy-pap | at java.base/java.lang.Thread.run(Thread.java:840)
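
The WARN and stack trace above are consistent with a deliberate negative check rather than a failure of the suite: zoneB is deleted a second time shortly after its successful undeployment, and PAP correctly answers that the policy no longer appears in any PDP group. A minimal sketch of the call pair that exercises this path, assuming the standard PAP deployment API base path /policy/pap/v1 on the port 6969 visible in the http-nio-6969 thread names (the host, TLS flags and credentials below are placeholders, not values from this log):

# Hedged sketch: undeploy zoneB, then repeat the same DELETE.
# The second call is rejected with "policy does not appear in any
# PDP group: zoneB null", matching the PfModelException logged above.
curl -sk -u "${PAP_USER}:${PAP_PASS}" -X DELETE \
  "https://localhost:6969/policy/pap/v1/pdps/policies/zoneB"
curl -sk -u "${PAP_USER}:${PAP_PASS}" -X DELETE \
  "https://localhost:6969/policy/pap/v1/pdps/policies/zoneB"
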
policy-pap | [2025-06-19T11:51:10.814+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group opaGroup
policy-pap | [2025-06-19T11:51:10.814+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-1] add policy vehicle 1.0.6 to subgroup opaGroup opa count=2
policy-pap | [2025-06-19T11:51:10.814+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy vehicle 1.0.6
policy-pap | [2025-06-19T11:51:10.814+00:00|INFO|SessionData|http-nio-6969-exec-1] add update opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 opaGroup opa policies=1
policy-pap | [2025-06-19T11:51:10.814+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group opaGroup
policy-pap | [2025-06-19T11:51:10.814+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group opaGroup
policy-pap | [2025-06-19T11:51:10.826+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=DEPLOYMENT, timestamp=2025-06-19T11:51:10Z, user=policyadmin)]
policy-pap | [2025-06-19T11:51:10.837+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting
policy-pap | [2025-06-19T11:51:10.837+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting listener
policy-pap | [2025-06-19T11:51:10.837+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting timer
policy-pap | [2025-06-19T11:51:10.837+00:00|INFO|TimerManager|http-nio-6969-exec-1] update timer registered Timer [name=a57f1de8-e764-48c3-ad34-5223c690d942, expireMs=1750333900837]
policy-pap | [2025-06-19T11:51:10.837+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting enqueue
policy-pap | [2025-06-19T11:51:10.837+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate started
policy-pap | [2025-06-19T11:51:10.838+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a57f1de8-e764-48c3-ad34-5223c690d942","timestampMs":1750333870814,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
policy-pap | [2025-06-19T11:51:10.848+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a57f1de8-e764-48c3-ad34-5223c690d942","timestampMs":1750333870814,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
policy-pap | [2025-06-19T11:51:10.849+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-pap | [2025-06-19T11:51:10.849+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a57f1de8-e764-48c3-ad34-5223c690d942","timestampMs":1750333870814,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"}
policy-pap | [2025-06-19T11:51:10.850+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-pap | [2025-06-19T11:51:10.885+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"a57f1de8-e764-48c3-ad34-5223c690d942","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"06bf80ac-7447-4635-9545-9afccd1c95db","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333870873","deploymentInstanceInfo":""}
policy-pap | [2025-06-19T11:51:10.887+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id a57f1de8-e764-48c3-ad34-5223c690d942
policy-pap | [2025-06-19T11:51:10.887+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"a57f1de8-e764-48c3-ad34-5223c690d942","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"06bf80ac-7447-4635-9545-9afccd1c95db","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333870873","deploymentInstanceInfo":""}
policy-pap | [2025-06-19T11:51:10.888+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping
policy-pap | [2025-06-19T11:51:10.888+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping enqueue
policy-pap | [2025-06-19T11:51:10.888+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping timer
policy-pap | [2025-06-19T11:51:10.888+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=a57f1de8-e764-48c3-ad34-5223c690d942, expireMs=1750333900837]
policy-pap | [2025-06-19T11:51:10.888+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping listener
policy-pap | [2025-06-19T11:51:10.888+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopped
policy-pap | [2025-06-19T11:51:10.900+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate successful
policy-pap | [2025-06-19T11:51:10.900+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 has no more requests
policy-pap | [2025-06-19T11:51:10.900+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification]
policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]}
policy-pap | [2025-06-19T11:51:14.988+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=ab4b0501-e50d-4de9-a593-4d933a804efd, expireMs=1750333874988]
policy-pap | [2025-06-19T11:51:31.085+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"f4056d36-4366-4d1f-b328-5daad6ef3ee3","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333891072","deploymentInstanceInfo":""}
policy-pap | [2025-06-19T11:51:31.087+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"f4056d36-4366-4d1f-b328-5daad6ef3ee3","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333891072","deploymentInstanceInfo":""}
policy-pap | [2025-06-19T11:51:31.088+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
policy-pap | [2025-06-19T11:51:34.991+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms
policy-pap | [2025-06-19T11:51:35.242+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group opaGroup
policy-pap | [2025-06-19T11:51:35.242+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-2] remove policy vehicle 1.0.6 from subgroup opaGroup opa count=1
policy-pap | [2025-06-19T11:51:35.242+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] Registering an undeploy for policy vehicle 1.0.6
policy-pap | [2025-06-19T11:51:35.242+00:00|INFO|SessionData|http-nio-6969-exec-2] add update opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 opaGroup opa policies=0
policy-pap | [2025-06-19T11:51:35.242+00:00|INFO|SessionData|http-nio-6969-exec-2] update cached group opaGroup
policy-pap | [2025-06-19T11:51:35.242+00:00|INFO|SessionData|http-nio-6969-exec-2] updating DB group opaGroup
policy-pap | [2025-06-19T11:51:35.249+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-19T11:51:35Z, user=policyadmin)]
policy-pap | [2025-06-19T11:51:35.257+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting
policy-pap | [2025-06-19T11:51:35.257+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting listener
policy-pap | [2025-06-19T11:51:35.257+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting timer
policy-pap | [2025-06-19T11:51:35.257+00:00|INFO|TimerManager|http-nio-6969-exec-2] update timer registered Timer [name=e6228620-ae02-43e1-9414-79c82b8d1bfa, expireMs=1750333925257]
policy-pap | [2025-06-19T11:51:35.257+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting enqueue
policy-pap | [2025-06-19T11:51:35.257+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate started
policy-pap | [2025-06-19T11:51:35.257+00:00|INFO|TimerManager|Thread-9] update timer waiting 30000ms Timer [name=e6228620-ae02-43e1-9414-79c82b8d1bfa, expireMs=1750333925257]
{"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"e6228620-ae02-43e1-9414-79c82b8d1bfa","timestampMs":1750333895242,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-19T11:51:35.265+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"e6228620-ae02-43e1-9414-79c82b8d1bfa","timestampMs":1750333895242,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-19T11:51:35.265+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-19T11:51:35.275+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"e6228620-ae02-43e1-9414-79c82b8d1bfa","timestampMs":1750333895242,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-19T11:51:35.276+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-19T11:51:35.280+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"e6228620-ae02-43e1-9414-79c82b8d1bfa","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"1d54935a-e758-4037-bbea-57928eb5e187","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333895270","deploymentInstanceInfo":""} policy-pap | [2025-06-19T11:51:35.280+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"e6228620-ae02-43e1-9414-79c82b8d1bfa","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"1d54935a-e758-4037-bbea-57928eb5e187","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333895270","deploymentInstanceInfo":""} policy-pap | [2025-06-19T11:51:35.280+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id e6228620-ae02-43e1-9414-79c82b8d1bfa policy-pap | [2025-06-19T11:51:35.281+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping policy-pap | [2025-06-19T11:51:35.281+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping enqueue policy-pap | [2025-06-19T11:51:35.281+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
policy-pap | [2025-06-19T11:51:35.281+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping timer
policy-pap | [2025-06-19T11:51:35.281+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=e6228620-ae02-43e1-9414-79c82b8d1bfa, expireMs=1750333925257]
policy-pap | [2025-06-19T11:51:35.281+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping listener
policy-pap | [2025-06-19T11:51:35.281+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopped
policy-pap | [2025-06-19T11:51:35.288+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate successful
policy-pap | [2025-06-19T11:51:35.289+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification]
policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]}
policy-pap | [2025-06-19T11:51:35.289+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 has no more requests
policy-pap | [2025-06-19T11:51:35.652+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group opaGroup
policy-pap | [2025-06-19T11:51:35.653+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-3] failed to undeploy policy: vehicle null
policy-pap | [2025-06-19T11:51:35.653+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-3] undeploy policy failed
policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: vehicle null
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108)
policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569)
policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728)
policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy()
policy-pap | at
org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) policy-pap | at 
jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 
policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) policy-pap | at 
org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at 
org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164)
policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140)
policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167)
policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90)
policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483)
policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116)
policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93)
policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344)
policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398)
policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63)
policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903)
policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740)
policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189)
policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658)
policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63)
policy-pap | at java.base/java.lang.Thread.run(Thread.java:840)
policy-pap | [2025-06-19T11:51:36.403+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group opaGroup
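The WARN pair and PfModelException above come from a DELETE for vehicle issued after that policy had already been undeployed, which this suite appears to exercise deliberately as a negative test; the same pattern repeats for abac further down. A grep-style sketch for pulling WARN records and exception headers out of such a dump, same Python 3 and record-framing assumptions as before:

#!/usr/bin/env python3
"""Surface WARN records and exception headers from a pap console dump."""
import re
import sys

# A timestamped header whose level field is WARN, e.g. [2025-...|WARN|...|...]
WARN = re.compile(r"\[\S+\|WARN\|[^\]]*\]")
# A record that starts with a fully qualified Exception/Error class and message.
EXC = re.compile(r"^[a-zA-Z_$][\w.$]*(Exception|Error): ")

for rec in re.split(r"policy-pap \| ", open(sys.argv[1], encoding="utf-8").read()):
    rec = rec.strip()
    if WARN.search(rec) or EXC.match(rec):
        print(rec.splitlines()[0])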
policy-pap | [2025-06-19T11:51:36.404+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-4] add policy abac 1.0.7 to subgroup opaGroup opa count=2
policy-pap | [2025-06-19T11:51:36.404+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering a deploy for policy abac 1.0.7
policy-pap | [2025-06-19T11:51:36.404+00:00|INFO|SessionData|http-nio-6969-exec-4] add update opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 opaGroup opa policies=1
policy-pap | [2025-06-19T11:51:36.404+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group opaGroup
policy-pap | [2025-06-19T11:51:36.404+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group opaGroup
policy-pap | [2025-06-19T11:51:36.411+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=DEPLOYMENT, timestamp=2025-06-19T11:51:36Z, user=policyadmin)]
policy-pap | [2025-06-19T11:51:36.419+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting
policy-pap | [2025-06-19T11:51:36.419+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting listener
policy-pap | [2025-06-19T11:51:36.419+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting timer
policy-pap | [2025-06-19T11:51:36.419+00:00|INFO|TimerManager|http-nio-6969-exec-4] update timer registered Timer [name=b00d126f-96f6-4ad4-b9e2-48d584e0fb25, expireMs=1750333926419]
policy-pap | [2025-06-19T11:51:36.419+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting enqueue
policy-pap | [2025-06-19T11:51:36.419+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate started
policy-pap | [2025-06-19T11:51:36.420+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap |
{"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"b00d126f-96f6-4ad4-b9e2-48d584e0fb25","timestampMs":1750333896404,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-19T11:51:36.428+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCi
AgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"b00d126f-96f6-4ad4-b9e2-48d584e0fb25","timestampMs":1750333896404,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-19T11:51:36.429+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-19T11:51:36.430+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | 
{"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"b00d126f-96f6-4ad4-b9e2-48d584e0fb25","timestampMs":1750333896404,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-19T11:51:36.431+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-19T11:51:36.459+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"b00d126f-96f6-4ad4-b9e2-48d584e0fb25","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"37cbf9e1-553c-446f-ab3b-d624583c2c29","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333896446","deploymentInstanceInfo":""} policy-pap | [2025-06-19T11:51:36.460+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id b00d126f-96f6-4ad4-b9e2-48d584e0fb25 policy-pap | [2025-06-19T11:51:36.461+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"b00d126f-96f6-4ad4-b9e2-48d584e0fb25","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"37cbf9e1-553c-446f-ab3b-d624583c2c29","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333896446","deploymentInstanceInfo":""} policy-pap | [2025-06-19T11:51:36.462+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping policy-pap | [2025-06-19T11:51:36.462+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping enqueue policy-pap | [2025-06-19T11:51:36.462+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping timer policy-pap | 
[2025-06-19T11:51:36.462+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=b00d126f-96f6-4ad4-b9e2-48d584e0fb25, expireMs=1750333926419]
policy-pap | [2025-06-19T11:51:36.463+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping listener
policy-pap | [2025-06-19T11:51:36.463+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopped
policy-pap | [2025-06-19T11:51:36.471+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate successful
policy-pap | [2025-06-19T11:51:36.471+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 has no more requests
policy-pap | [2025-06-19T11:51:36.471+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification]
policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]}
policy-pap | [2025-06-19T11:52:01.087+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group opaGroup
policy-pap | [2025-06-19T11:52:01.087+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-7] remove policy abac 1.0.7 from subgroup opaGroup opa count=1
policy-pap | [2025-06-19T11:52:01.087+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering an undeploy for policy abac 1.0.7
policy-pap | [2025-06-19T11:52:01.087+00:00|INFO|SessionData|http-nio-6969-exec-7] add update opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 opaGroup opa policies=0
policy-pap | [2025-06-19T11:52:01.087+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group opaGroup
policy-pap | [2025-06-19T11:52:01.087+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group opaGroup
policy-pap | [2025-06-19T11:52:01.096+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=UNDEPLOYMENT, timestamp=2025-06-19T11:52:01Z, user=policyadmin)]
policy-pap | [2025-06-19T11:52:01.107+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting
policy-pap | [2025-06-19T11:52:01.107+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting listener
policy-pap | [2025-06-19T11:52:01.107+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting timer
policy-pap | [2025-06-19T11:52:01.107+00:00|INFO|TimerManager|http-nio-6969-exec-7] update timer registered Timer [name=54a9fb81-6a73-441e-9488-4b382e00919f, expireMs=1750333951107]
policy-pap | [2025-06-19T11:52:01.107+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate starting enqueue
policy-pap | [2025-06-19T11:52:01.107+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate started
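In the abac PDP_UPDATE deployed above, properties.policy and properties.data are base64 blobs: the policy value decodes to the Rego module (package abac with a default-deny allow rule gated on viewable_sensor_data and action_is_read) and the data value to the sensor_data JSON document. A minimal inspection sketch, assuming Python 3 and that one PDP_UPDATE JSON object has been copied out of the log into a file:

#!/usr/bin/env python3
"""Decode the base64 payloads of an onap.policies.native.opa PDP_UPDATE."""
import base64
import json
import sys

def dump_opa_policy(pdp_update):
    for entry in pdp_update.get("policiesToBeDeployed", []):
        props = entry.get("properties", {})
        for name, blob in props.get("policy", {}).items():
            # e.g. "abac" -> "package abac\n\nimport rego.v1\n..."
            print(f"--- rego module: {name} ---")
            print(base64.b64decode(blob).decode("utf-8"))
        for path, blob in props.get("data", {}).items():
            # e.g. "node.abac" -> {"sensor_data": [...]}
            doc = json.loads(base64.b64decode(blob))
            print(f"--- data at {path}: {len(json.dumps(doc))} bytes of JSON ---")

if __name__ == "__main__":
    dump_opa_policy(json.load(open(sys.argv[1], encoding="utf-8")))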
{"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"54a9fb81-6a73-441e-9488-4b382e00919f","timestampMs":1750333921087,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-19T11:52:01.116+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"54a9fb81-6a73-441e-9488-4b382e00919f","timestampMs":1750333921087,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-19T11:52:01.116+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-19T11:52:01.121+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-d6a51d0f-c020-475d-99df-591f67ce6007","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"54a9fb81-6a73-441e-9488-4b382e00919f","timestampMs":1750333921087,"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-19T11:52:01.122+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-19T11:52:01.130+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"54a9fb81-6a73-441e-9488-4b382e00919f","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"da3391ca-f009-4d32-ad1f-47d89f7748b0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333921116","deploymentInstanceInfo":""} policy-pap | [2025-06-19T11:52:01.130+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping policy-pap | [2025-06-19T11:52:01.130+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping enqueue policy-pap | [2025-06-19T11:52:01.130+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping timer policy-pap | [2025-06-19T11:52:01.130+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=54a9fb81-6a73-441e-9488-4b382e00919f, expireMs=1750333951107] policy-pap | [2025-06-19T11:52:01.130+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopping listener policy-pap | [2025-06-19T11:52:01.130+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate stopped policy-pap | [2025-06-19T11:52:01.131+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp 
Update","response":{"responseTo":"54a9fb81-6a73-441e-9488-4b382e00919f","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40","requestId":"da3391ca-f009-4d32-ad1f-47d89f7748b0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750333921116","deploymentInstanceInfo":""} policy-pap | [2025-06-19T11:52:01.132+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 54a9fb81-6a73-441e-9488-4b382e00919f policy-pap | [2025-06-19T11:52:01.142+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 PdpUpdate successful policy-pap | [2025-06-19T11:52:01.142+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7dfb13bc-4242-4f33-a0d6-be4d3308bc40 has no more requests policy-pap | [2025-06-19T11:52:01.142+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}]} policy-pap | [2025-06-19T11:52:01.507+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group opaGroup policy-pap | [2025-06-19T11:52:01.508+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-8] failed to undeploy policy: abac null policy-pap | [2025-06-19T11:52:01.508+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-8] undeploy policy failed policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: abac null policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() policy-pap | at 
org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) policy-pap | at 
jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 
policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) policy-pap | at 
org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at 
org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) policy-pap | [2025-06-19T11:52:05.257+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=e6228620-ae02-43e1-9414-79c82b8d1bfa, expireMs=1750333925257] postgres | The files belonging to this database system will 
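Note on the trace above: the Kafka notification shows the undeploy of abac 1.0.7 had already completed with success-count 1, so the subsequent DELETE against the PAP REST API (port 6969, per the http-nio-6969 thread names) finds no PDP group still holding the policy and fails with PfModelException. A client-side guard of roughly the following shape would avoid the duplicate undeploy. This is a sketch only: the endpoint paths are inferred from the controller names in the trace and the credentials are placeholders, so verify both against the policy-pap API documentation.

# Hypothetical guard against undeploying a policy that is no longer in any PDP group.
PAP_API="https://localhost:6969/policy/pap/v1"   # port 6969 per the http-nio-6969 threads above
AUTH="${PAP_USER}:${PAP_PASS}"                   # assumed credentials, not taken from this log
if curl -sk -u "${AUTH}" "${PAP_API}/pdps" | grep -q '"abac"'; then
    curl -sk -u "${AUTH}" -X DELETE "${PAP_API}/pdps/policies/abac"
else
    echo "abac is not mapped to any PDP group; skipping the duplicate undeploy"
fi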
postgres | The files belonging to this database system will be owned by user "postgres".
postgres | This user must also own the server process.
postgres | 
postgres | The database cluster will be initialized with locale "en_US.utf8".
postgres | The default database encoding has accordingly been set to "UTF8".
postgres | The default text search configuration will be set to "english".
postgres | 
postgres | Data page checksums are disabled.
postgres | 
postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres | creating subdirectories ... ok
postgres | selecting dynamic shared memory implementation ... posix
postgres | selecting default max_connections ... 100
postgres | selecting default shared_buffers ... 128MB
postgres | selecting default time zone ... Etc/UTC
postgres | creating configuration files ... ok
postgres | running bootstrap script ... ok
postgres | performing post-bootstrap initialization ... ok
postgres | initdb: warning: enabling "trust" authentication for local connections
postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
postgres | syncing data to disk ... ok
postgres | 
postgres | 
postgres | Success. You can now start the database server using:
postgres | 
postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start
postgres | 
postgres | waiting for server to start....2025-06-19 11:46:55.345 UTC [48] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
postgres | 2025-06-19 11:46:55.358 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres | 2025-06-19 11:46:55.369 UTC [51] LOG: database system was shut down at 2025-06-19 11:46:54 UTC
postgres | 2025-06-19 11:46:55.374 UTC [48] LOG: database system is ready to accept connections
postgres | done
postgres | server started
postgres | 
postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf
postgres | 
postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh
postgres | #!/bin/bash -xv
postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved
postgres | #
postgres | # Licensed under the Apache License, Version 2.0 (the "License");
postgres | # you may not use this file except in compliance with the License.
postgres | # You may obtain a copy of the License at
postgres | #
postgres | # http://www.apache.org/licenses/LICENSE-2.0
postgres | #
postgres | # Unless required by applicable law or agreed to in writing, software
postgres | # distributed under the License is distributed on an "AS IS" BASIS,
postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
postgres | # See the License for the specific language governing permissions and
postgres | # limitations under the License.
postgres | 
postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';"
postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';'
postgres | CREATE ROLE
postgres | 
postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | do
postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};"
postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;"
postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;"
postgres | done
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;'
postgres | GRANT
postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm
postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;'
postgres | CREATE DATABASE
postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;'
postgres | ALTER DATABASE
postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;'
postgres | GRANT
postgres | 
postgres | 2025-06-19 11:46:56.770 UTC [48] LOG: received fast shutdown request
postgres 
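The db-pg.sh loop above is not idempotent: CREATE USER and CREATE DATABASE fail if the objects already exist, which only works here because docker-entrypoint-initdb.d scripts run once against a fresh data directory. A hypothetical rerun-safe variant follows; the pg_roles/pg_database existence checks are standard psql one-liners and are not part of the actual db-pg.sh. ALTER DATABASE and GRANT are already safe to repeat, so only the CREATEs need guarding.

# Sketch of an idempotent version of the provisioning loop above.
psql -U postgres -d postgres -tAc "SELECT 1 FROM pg_roles WHERE rolname = '${PGSQL_USER}';" | grep -q 1 ||
    psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';"
for db in migration pooling policyadmin policyclamp operationshistory clampacm
do
    psql -U postgres -d postgres -tAc "SELECT 1 FROM pg_database WHERE datname = '${db}';" | grep -q 1 ||
        psql -U postgres -d postgres --command "CREATE DATABASE ${db};"
    # ALTER and GRANT are idempotent and can run unconditionally.
    psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER};"
    psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER};"
done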
| waiting for server to shut down....2025-06-19 11:46:56.772 UTC [48] LOG: aborting any active transactions postgres | 2025-06-19 11:46:56.775 UTC [48] LOG: background worker "logical replication launcher" (PID 54) exited with exit code 1 postgres | 2025-06-19 11:46:56.775 UTC [49] LOG: shutting down postgres | 2025-06-19 11:46:56.777 UTC [49] LOG: checkpoint starting: shutdown immediate postgres | 2025-06-19 11:46:57.398 UTC [49] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.419 s, sync=0.193 s, total=0.623 s; sync files=1788, longest=0.015 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 postgres | 2025-06-19 11:46:57.412 UTC [48] LOG: database system is shut down postgres | done postgres | server stopped postgres | postgres | PostgreSQL init process complete; ready for start up. postgres | postgres | 2025-06-19 11:46:57.497 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-19 11:46:57.497 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 postgres | 2025-06-19 11:46:57.498 UTC [1] LOG: listening on IPv6 address "::", port 5432 postgres | 2025-06-19 11:46:57.505 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-19 11:46:57.515 UTC [101] LOG: database system was shut down at 2025-06-19 11:46:57 UTC postgres | 2025-06-19 11:46:57.522 UTC [1] LOG: database system is ready to accept connections postgres | 2025-06-19 11:51:57.587 UTC [99] LOG: checkpoint starting: time postgres | 2025-06-19 11:53:02.429 UTC [99] LOG: checkpoint complete: wrote 650 buffers (4.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=64.805 s, sync=0.025 s, total=64.842 s; sync files=515, longest=0.003 s, average=0.001 s; distance=3535 kB, estimate=3535 kB; lsn=0/3150318, redo lsn=0/314DE18 prometheus | time=2025-06-19T11:46:51.998Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d prometheus | time=2025-06-19T11:46:51.999Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" prometheus | time=2025-06-19T11:46:51.999Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" prometheus | time=2025-06-19T11:46:52.000Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs prometheus | time=2025-06-19T11:46:52.002Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 prometheus | time=2025-06-19T11:46:52.003Z level=INFO source=main.go:1266 msg="Starting TSDB ..." prometheus | time=2025-06-19T11:46:52.009Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 prometheus | time=2025-06-19T11:46:52.009Z level=INFO source=tls_config.go:350 msg="TLS is disabled." 
component=web http2=false address=[::]:9090 prometheus | time=2025-06-19T11:46:52.012Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb prometheus | time=2025-06-19T11:46:52.012Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1.56µs prometheus | time=2025-06-19T11:46:52.012Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb prometheus | time=2025-06-19T11:46:52.012Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=260.307µs prometheus | time=2025-06-19T11:46:52.012Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=19.481µs wal_replay_duration=279.398µs wbl_replay_duration=170ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.56µs total_replay_duration=341.21µs prometheus | time=2025-06-19T11:46:52.014Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC prometheus | time=2025-06-19T11:46:52.014Z level=INFO source=main.go:1290 msg="TSDB started" prometheus | time=2025-06-19T11:46:52.014Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml prometheus | time=2025-06-19T11:46:52.016Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 prometheus | time=2025-06-19T11:46:52.016Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=2.36µs remote_storage=3.07µs web_handler=990ns query_engine=1.97µs scrape=348.718µs scrape_sd=352.839µs notify=146.884µs notify_sd=18.21µs rules=1.58µs tracing=5.06µs filename=/etc/prometheus/prometheus.yml totalDuration=1.657101ms prometheus | time=2025-06-19T11:46:52.016Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." prometheus | time=2025-06-19T11:46:52.016Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2025-06-19 11:46:54,176] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-19 11:46:54,179] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-19 11:46:54,179] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-19 11:46:54,179] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-19 11:46:54,179] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-19 11:46:54,184] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-19 11:46:54,184] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-19 11:46:54,184] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-19 11:46:54,184] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2025-06-19 11:46:54,187] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2025-06-19 11:46:54,188] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-19 11:46:54,189] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-19 11:46:54,189] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-19 11:46:54,189] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-19 11:46:54,189] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-19 11:46:54,189] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2025-06-19 11:46:54,205] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2025-06-19 11:46:54,207] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-19 11:46:54,208] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-19 11:46:54,210] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-19 11:46:54,218] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,218] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,218] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,218] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,218] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,218] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,218] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,218] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,218] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,218] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,219] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,220] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,220] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,220] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | 
[2025-06-19 11:46:54,220] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,220] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-reso
urce-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kaf
ka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,220] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,220] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,220] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,220] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,220] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,220] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,220] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,220] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,220] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,220] INFO Server environment:os.memory.free=494MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,220] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,220] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,220] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,220] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,221] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,221] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,221] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,221] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,221] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,221] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper | [2025-06-19 11:46:54,222] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,222] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,229] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-19 11:46:54,229] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-19 11:46:54,229] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-19 11:46:54,230] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-19 11:46:54,230] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-19 11:46:54,230] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-19 11:46:54,230] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-19 11:46:54,230] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-19 11:46:54,232] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,232] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,232] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-19 11:46:54,232] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-19 11:46:54,232] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,254] INFO Logging initialized @462ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper | [2025-06-19 11:46:54,337] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-19 11:46:54,337] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-19 11:46:54,361] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server) zookeeper | [2025-06-19 11:46:54,414] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper | [2025-06-19 11:46:54,415] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper | [2025-06-19 11:46:54,416] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) zookeeper | [2025-06-19 11:46:54,424] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper | [2025-06-19 11:46:54,437] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-19 11:46:54,449] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper | [2025-06-19 11:46:54,449] INFO Started @662ms (org.eclipse.jetty.server.Server) zookeeper | [2025-06-19 11:46:54,450] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper | [2025-06-19 11:46:54,455] INFO Using 
org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-19 11:46:54,455] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-19 11:46:54,457] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-19 11:46:54,458] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-19 11:46:54,475] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-19 11:46:54,475] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-19 11:46:54,475] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-19 11:46:54,475] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-19 11:46:54,480] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper | [2025-06-19 11:46:54,480] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-19 11:46:54,483] INFO Snapshot loaded in 9 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-19 11:46:54,484] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-19 11:46:54,485] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-19 11:46:54,505] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper | [2025-06-19 11:46:54,506] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper | [2025-06-19 11:46:54,519] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper | [2025-06-19 11:46:54,519] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper | [2025-06-19 11:46:55,643] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) Tearing down containers... 
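Aside on the ZooKeeper startup logged above, before the container teardown that follows: the WARN "Either no config or no quorum defined in config" simply means the server ran single-node. A zookeeper.properties reproducing the logged settings might look like the sketch below; the file layout is an assumption, and only the values (client port 2181, tick time 3000 ms, data and transaction-log directories, admin server on 8080, purge task disabled) come from the log.

# Hypothetical standalone config consistent with the logged values.
cat > zookeeper.properties <<'EOF'
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/log
clientPort=2181
tickTime=3000
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
admin.serverPort=8080
EOF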
Container policy-csit Stopping
Container policy-opa-pdp Stopping
Container grafana Stopping
Container policy-csit Stopped
Container policy-csit Removing
Container policy-csit Removed
Container grafana Stopped
Container grafana Removing
Container grafana Removed
Container prometheus Stopping
Container prometheus Stopped
Container prometheus Removing
Container prometheus Removed
Container policy-opa-pdp Stopped
Container policy-opa-pdp Removing
Container policy-opa-pdp Removed
Container policy-pap Stopping
Container policy-pap Stopped
Container policy-pap Removing
Container policy-pap Removed
Container kafka Stopping
Container policy-api Stopping
Container kafka Stopped
Container kafka Removing
Container kafka Removed
Container zookeeper Stopping
Container zookeeper Stopped
Container zookeeper Removing
Container zookeeper Removed
Container policy-api Stopped
Container policy-api Removing
Container policy-api Removed
Container policy-db-migrator Stopping
Container policy-db-migrator Stopped
Container policy-db-migrator Removing
Container policy-db-migrator Removed
Container postgres Stopping
Container postgres Stopped
Container postgres Removing
Container postgres Removed
Network compose_default Removing
Network compose_default Removed
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2154 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml:
Done!
-Copying log files to build dir:
Done!
-Assigning results to build:
Done!
-Checking thresholds:
Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins15565009338730354742.sh
---> sysstat.sh
[policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins4533389018106071824.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp ']'
+ mkdir -p /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/archives/
[policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins14383983432253906839.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-T0yX from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-T0yX/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins18403900700010911909.sh
provisioning config files... 
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp@tmp/config15755777180109236954tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] Run condition [Regular expression match] preventing perform for step [Provide Configuration files] [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins5710928329550737182.sh ---> create-netrc.sh [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins5436253457108186371.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-T0yX from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-T0yX/bin to PATH [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins2236273901154077223.sh ---> sudo-logs.sh Archiving 'sudo' log.. [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins16141778254527985055.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-T0yX from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-T0yX/bin to PATH INFO: No Stack... INFO: Retrieving Pricing Info for: v3-standard-8 INFO: Archiving Costs [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash -l /tmp/jenkins469190935885173665.sh ---> logs-deploy.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-T0yX from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-T0yX/bin to PATH INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-opa-pdp-master-project-csit-policy-opa-pdp/182 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt Archives upload complete. 
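For reference, the package-listing step traced earlier reduces to a short dpkg diff. The standalone sketch below uses the same /tmp paths as the trace, with $WORKSPACE standing in for the hard-coded workspace directory; it is illustrative, not the actual package-listing.sh.

# Capture the post-build package set and diff it against the pre-build snapshot.
dpkg -l | grep '^ii' > /tmp/packages_end.txt
diff /tmp/packages_start.txt /tmp/packages_end.txt > /tmp/packages_diff.txt || true   # diff exits 1 when the files differ
mkdir -p "${WORKSPACE}/archives/"
cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt "${WORKSPACE}/archives/"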
INFO: archiving logs to Nexus ---> uname -a: Linux prd-ubuntu1804-docker-8c-8g-22339 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux ---> lscpu: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 8 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC-Rome Processor Stepping: 0 CPU MHz: 2799.998 BogoMIPS: 5599.99 Virtualization: AMD-V Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 512K L3 cache: 16384K NUMA node0 CPU(s): 0-7 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities ---> nproc: 8 ---> df -h: Filesystem Size Used Avail Use% Mounted on udev 16G 0 16G 0% /dev tmpfs 3.2G 708K 3.2G 1% /run /dev/vda1 155G 15G 141G 10% / tmpfs 16G 0 16G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 16G 0 16G 0% /sys/fs/cgroup /dev/vda15 105M 4.4M 100M 5% /boot/efi tmpfs 3.2G 0 3.2G 0% /run/user/1001 ---> free -m: total used free shared buff/cache available Mem: 32167 864 24070 0 7232 30847 Swap: 1023 0 1023 ---> ip addr: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 link/ether fa:16:3e:e2:1d:96 brd ff:ff:ff:ff:ff:ff inet 10.30.106.100/23 brd 10.30.107.255 scope global dynamic ens3 valid_lft 85808sec preferred_lft 85808sec inet6 fe80::f816:3eff:fee2:1d96/64 scope link valid_lft forever preferred_lft forever 3: docker0: mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:51:d1:0a:d9 brd ff:ff:ff:ff:ff:ff inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::42:51ff:fed1:ad9/64 scope link valid_lft forever preferred_lft forever ---> sar -b -r -n DEV: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-22339) 06/19/25 _x86_64_ (8 CPU) 11:44:33 LINUX RESTART (8 CPU) 11:45:01 tps rtps wtps bread/s bwrtn/s 11:46:01 323.93 57.51 266.42 3618.20 73661.06 11:47:01 730.94 22.66 708.28 2672.22 250655.02 11:48:01 53.49 0.03 53.46 0.27 21028.23 11:49:01 5.43 0.00 5.43 0.00 123.05 11:50:01 17.01 0.12 16.90 14.40 2086.59 11:51:01 205.83 0.25 205.58 16.53 31890.84 11:52:01 9.42 0.00 9.42 0.00 209.70 11:53:01 13.76 0.02 13.75 0.13 329.01 11:54:01 64.74 1.28 63.46 104.12 2157.91 Average: 158.29 9.10 149.19 713.97 42459.96 11:45:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 11:46:01 30092652 31677100 2846560 8.64 71540 1823320 1433836 4.22 876188 1677536 180832 11:47:01 24553116 31065636 8386096 25.46 161176 6439568 6192332 18.22 1721484 6032888 13076 11:48:01 23437540 30100148 9501672 28.85 163472 6592272 7343592 21.61 2766916 
11:49:01  23426928 30078240 9512284 28.88 163616 6581288 7601832 22.37 2789284 6075436 168
11:50:01  23180784 30033844 9758428 29.63 176108 6749384 7816744 23.00 2856560 6229320 122672
11:51:01  22725808 29917380 10213404 31.01 204588 7027788 7960604 23.42 3064564 6438556 2124
11:52:01  22701880 29894648 10237332 31.08 204712 7028588 7964436 23.43 3094372 6432956 636
11:53:01  22742020 29934520 10197192 30.96 204832 7029136 7929516 23.33 3058192 6431836 248
11:54:01  24709520 31646840 8229692 24.98 206204 6767632 1562588 4.60 1405888 6189332 10804
Average:  24174472 30483151 8764740 26.61 172916 6226553 6200609 18.24 2403716 5732818 36779

11:45:01  IFACE        rxpck/s  txpck/s  rxkB/s    txkB/s   rxcmp/s  txcmp/s  rxmcst/s  %ifutil
11:46:01  docker0      0.00     0.00     0.00      0.00     0.00     0.00     0.00      0.00
11:46:01  lo           1.53     1.53     0.18      0.18     0.00     0.00     0.00      0.00
11:46:01  ens3         518.41   353.72   1838.77   83.46    0.00     0.00     0.00      0.00
11:47:01  vethc88f359  0.23     0.40     0.01      0.02     0.00     0.00     0.00      0.00
11:47:01  vetha851b41  0.17     0.30     0.01      0.02     0.00     0.00     0.00      0.00
11:47:01  veth938279b  11.93    18.56    0.99      62.78    0.00     0.00     0.00      0.01
11:47:01  docker0      0.00     0.00     0.00      0.00     0.00     0.00     0.00      0.00
11:48:01  vethc88f359  91.72    91.32    16.02     18.61    0.00     0.00     0.00      0.00
11:48:01  vetha851b41  7.37     7.55     1.38      0.81     0.00     0.00     0.00      0.00
11:48:01  veth938279b  29.88    41.84    2.30      246.76   0.00     0.00     0.00      0.02
11:48:01  docker0      0.00     0.00     0.00      0.00     0.00     0.00     0.00      0.00
11:49:01  vethc88f359  0.17     0.18     0.54      0.02     0.00     0.00     0.00      0.00
11:49:01  vetha851b41  6.30     9.20     1.43      0.71     0.00     0.00     0.00      0.00
11:49:01  veth938279b  0.43     0.27     0.03      0.02     0.00     0.00     0.00      0.00
11:49:01  docker0      0.00     0.00     0.00      0.00     0.00     0.00     0.00      0.00
11:50:01  vethc88f359  0.18     0.22     0.55      0.02     0.00     0.00     0.00      0.00
11:50:01  vetha851b41  106.50   109.08   12.91     25.75    0.00     0.00     0.00      0.00
11:50:01  veth938279b  0.00     0.03     0.00      0.00     0.00     0.00     0.00      0.00
11:50:01  docker0      0.00     0.00     0.00      0.00     0.00     0.00     0.00      0.00
11:51:01  vethc88f359  33.31    33.22    4.28      8.22     0.00     0.00     0.00      0.00
11:51:01  vetha851b41  140.19   142.64   16.12     33.64    0.00     0.00     0.00      0.00
11:51:01  veth938279b  0.00     0.07     0.00      0.00     0.00     0.00     0.00      0.00
11:51:01  docker0      113.50   155.88   7.51      1346.29  0.00     0.00     0.00      0.00
11:52:01  vethc88f359  103.42   103.00   12.05     25.12    0.00     0.00     0.00      0.00
11:52:01  vetha851b41  592.75   593.38   64.75     142.54   0.00     0.00     0.00      0.01
11:52:01  veth938279b  0.00     0.02     0.00      0.00     0.00     0.00     0.00      0.00
11:52:01  docker0      0.00     0.00     0.00      0.00     0.00     0.00     0.00      0.00
11:53:01  vethc88f359  0.33     0.35     0.58      0.03     0.00     0.00     0.00      0.00
11:53:01  vetha851b41  6.65     9.57     1.56      0.74     0.00     0.00     0.00      0.00
11:53:01  veth938279b  0.00     0.02     0.00      0.00     0.00     0.00     0.00      0.00
11:53:01  docker0      0.00     0.00     0.00      0.00     0.00     0.00     0.00      0.00
11:54:01  docker0      0.00     0.00     0.00      0.00     0.00     0.00     0.00      0.00
11:54:01  lo           30.59    30.59    2.70      2.70     0.00     0.00     0.00      0.00
11:54:01  ens3         1951.56  1228.51  37393.17  188.70   0.00     0.00     0.00      0.00
Average:  docker0      12.61    17.32    0.83      149.61   0.00     0.00     0.00      0.00
Average:  lo           2.99     2.99     0.27      0.27     0.00     0.00     0.00      0.00
Average:  ens3         214.63   135.04   4147.22   20.80    0.00     0.00     0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-22339)  06/19/25  _x86_64_  (8 CPU)

11:44:33  LINUX RESTART (8 CPU)

11:45:01  CPU  %user  %nice  %system  %iowait  %steal  %idle
11:46:01  all  11.30  0.00   1.10     2.75     0.05    84.79
11:46:01  0    10.23  0.00   0.78     0.56     0.03    88.39
11:46:01  1    11.35  0.00   0.85     0.83     0.03    86.93
11:46:01  2    14.28  0.00   1.85     7.82     0.07    75.98
11:46:01  3    12.40  0.00   0.94     3.76     0.05    82.85
11:46:01  4    18.72  0.00   1.42     0.60     0.03    79.23
11:46:01  5    11.34  0.00   1.92     3.94     0.05    82.75
11:46:01  6    4.42   0.00   0.49     4.30     0.10    90.70
11:46:01  7    7.66   0.00   0.55     0.23     0.03    91.52
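When scanning the interface table above, the "Average:" rows carry the per-run summary. A hypothetical one-liner (assuming the same sysstat column layout, where rxkB/s and txkB/s are fields 5 and 6) that pulls out just those averages:

    # Assumed helper: print per-interface average throughput from sar -n DEV output
    sar -n DEV | awk '$1 == "Average:" && $2 != "IFACE" \
        { printf "%s rx=%s kB/s tx=%s kB/s\n", $2, $5, $6 }'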
11:47:01  all  20.57  0.00   9.04     9.45     0.10    60.84
11:47:01  0    17.65  0.00   8.50     13.93    0.10    59.82
11:47:01  1    19.48  0.00   9.44     8.94     0.10    62.04
11:47:01  2    28.61  0.00   10.24    23.50    0.12    37.53
11:47:01  3    19.80  0.00   9.25     8.35     0.10    62.50
11:47:01  4    18.94  0.00   8.65     5.48     0.10    66.83
11:47:01  5    21.36  0.00   8.66     3.53     0.08    66.37
11:47:01  6    19.80  0.00   9.07     9.42     0.10    61.61
11:47:01  7    18.98  0.00   8.53     2.52     0.10    69.87
11:48:01  all  22.31  0.00   2.58     0.60     0.08    74.44
11:48:01  0    12.84  0.00   1.76     0.08     0.07    85.25
11:48:01  1    21.51  0.00   2.81     0.10     0.07    75.51
11:48:01  2    25.26  0.00   3.20     0.80     0.07    70.67
11:48:01  3    22.00  0.00   2.38     0.20     0.07    75.36
11:48:01  4    16.82  0.00   1.97     0.89     0.07    80.25
11:48:01  5    22.13  0.00   2.53     2.25     0.08    73.00
11:48:01  6    26.44  0.00   2.96     0.22     0.08    70.30
11:48:01  7    31.41  0.00   3.05     0.22     0.08    65.24
11:49:01  all  0.85   0.00   0.16     0.04     0.05    98.91
11:49:01  0    1.60   0.00   0.17     0.00     0.07    98.17
11:49:01  1    0.68   0.00   0.10     0.00     0.05    99.17
11:49:01  2    0.32   0.00   0.10     0.00     0.03    99.55
11:49:01  3    1.10   0.00   0.17     0.17     0.05    98.51
11:49:01  4    0.52   0.00   0.17     0.00     0.03    99.28
11:49:01  5    0.99   0.00   0.22     0.02     0.07    98.71
11:49:01  6    0.95   0.00   0.15     0.08     0.05    98.76
11:49:01  7    0.65   0.00   0.22     0.02     0.03    99.08
11:50:01  all  3.01   0.00   0.65     0.09     0.05    96.21
11:50:01  0    3.14   0.00   0.48     0.02     0.03    96.33
11:50:01  1    2.32   0.00   0.48     0.03     0.03    97.13
11:50:01  2    2.57   0.00   1.09     0.02     0.05    96.28
11:50:01  3    3.46   0.00   0.47     0.25     0.05    95.77
11:50:01  4    2.57   0.00   0.50     0.08     0.03    96.82
11:50:01  5    2.90   0.00   0.75     0.03     0.05    96.26
11:50:01  6    3.32   0.00   0.70     0.03     0.05    95.89
11:50:01  7    3.75   0.00   0.74     0.30     0.05    95.17
11:51:01  all  8.56   0.00   2.32     1.43     0.07    87.62
11:51:01  0    8.20   0.00   1.79     0.35     0.05    89.61
11:51:01  1    11.46  0.00   2.85     0.45     0.08    85.16
11:51:01  2    4.65   0.00   1.78     0.37     0.08    93.12
11:51:01  3    9.71   0.00   2.67     7.66     0.10    79.86
11:51:01  4    11.37  0.00   2.83     0.35     0.07    85.38
11:51:01  5    8.20   0.00   2.39     0.03     0.05    89.32
11:51:01  6    5.66   0.00   2.30     1.29     0.08    90.67
11:51:01  7    9.22   0.00   1.97     0.92     0.07    87.82
11:52:01  all  3.90   0.00   0.68     0.04     0.05    95.32
11:52:01  0    5.19   0.00   0.70     0.08     0.05    93.98
11:52:01  1    3.87   0.00   0.65     0.05     0.07    95.36
11:52:01  2    2.55   0.00   0.91     0.07     0.05    96.42
11:52:01  3    4.34   0.00   0.50     0.02     0.03    95.11
11:52:01  4    5.41   0.00   0.50     0.00     0.03    94.05
11:52:01  5    3.13   0.00   1.07     0.03     0.05    95.72
11:52:01  6    3.63   0.00   0.53     0.07     0.05    95.72
11:52:01  7    3.06   0.00   0.57     0.00     0.05    96.33
11:53:01  all  0.66   0.00   0.17     0.04     0.05    99.08
11:53:01  0    0.47   0.00   0.18     0.17     0.05    99.13
11:53:01  1    0.35   0.00   0.18     0.08     0.07    99.31
11:53:01  2    1.11   0.00   0.10     0.00     0.03    98.76
11:53:01  3    0.53   0.00   0.20     0.00     0.07    99.20
11:53:01  4    0.90   0.00   0.12     0.02     0.03    98.93
11:53:01  5    0.52   0.00   0.17     0.03     0.05    99.23
11:53:01  6    0.67   0.00   0.25     0.05     0.08    98.95
11:53:01  7    0.80   0.00   0.15     0.02     0.05    98.98
11:54:01  all  5.64   0.00   0.98     0.22     0.04    93.12
11:54:01  0    15.34  0.00   1.39     0.17     0.05    83.05
11:54:01  1    4.15   0.00   0.97     0.10     0.03    94.75
11:54:01  2    1.83   0.00   0.75     0.07     0.03    97.32
11:54:01  3    14.47  0.00   0.85     0.10     0.03    84.54
11:54:01  4    1.67   0.00   0.77     0.07     0.02    97.48
11:54:01  5    1.38   0.00   0.98     1.08     0.03    96.51
11:54:01  6    4.73   0.00   1.39     0.05     0.05    93.78
11:54:01  7    1.54   0.00   0.74     0.08     0.03    97.61
Average:  all  8.51   0.00   1.95     1.62     0.06    87.86
Average:  0    8.27   0.00   1.74     1.69     0.06    88.25
Average:  1    8.34   0.00   2.03     1.17     0.06    88.41
Average:  2    8.99   0.00   2.21     3.60     0.06    85.14
Average:  3    9.74   0.00   1.93     2.27     0.06    86.01
Average:  4    8.53   0.00   1.87     0.83     0.05    88.73
Average:  5    7.97   0.00   2.07     1.21     0.06    88.69
Average:  6    7.71   0.00   1.97     1.71     0.07    88.53
Average:  7    8.54   0.00   1.82     0.48     0.06    89.11
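The per-CPU table ends with the "Average:" block. A hypothetical check (assuming the column layout above, where %idle is field 8) that flags any CPU whose average idle time over the run dropped below a chosen threshold:

    # Assumed helper: report CPUs whose average %idle over the run is below 80%
    sar -P ALL | awk '$1 == "Average:" && $2 ~ /^[0-9]+$/ && $8 + 0 < 80 \
        { print "CPU " $2 " idle " $8 "%" }'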