Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-21584 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-5pN1hVrKcM5n/agent.2073
SSH_AGENT_PID=2075
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp@tmp/private_key_6327988617652414941.key (/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp@tmp/private_key_6327988617652414941.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 473f78ecac5fb75e5968b31a5bab95eaba72c803 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 473f78ecac5fb75e5968b31a5bab95eaba72c803 # timeout=30
Commit message: "Add Fix fail handling in ACM runtime in CSIT"
 > git rev-list --no-walk 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c # timeout=10
provisioning config files...
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins1431226351649644145.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-QaKm
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-QaKm/bin to PATH
Generating Requirements File
Python 3.10.6
pip 25.1.1 from /tmp/venv-QaKm/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.6.2
aspy.yaml==1.3.0
attrs==25.3.0
autopage==0.5.2
beautifulsoup4==4.13.4
boto3==1.38.36
botocore==1.38.36
bs4==0.0.2
cachetools==5.5.2
certifi==2025.6.15
cffi==1.17.1
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.4.2
click==8.2.1
cliff==4.10.0
cmd2==2.6.1
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.2.1
defusedxml==0.7.1
Deprecated==1.2.18
distlib==0.3.9
dnspython==2.7.0
docker==7.1.0
dogpile.cache==1.4.0
durationpy==0.10
email_validator==2.2.0
filelock==3.18.0
future==1.0.0
gitdb==4.0.12
GitPython==3.1.44
google-auth==2.40.3
httplib2==0.22.0
identify==2.6.12
idna==3.10
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.6
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.24.0
jsonschema-specifications==2025.4.1
keystoneauth1==5.11.1
kubernetes==33.1.0
lftools==0.37.13
lxml==5.4.0
MarkupSafe==3.0.2
msgpack==1.1.1
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.3.0
niet==1.4.2
nodeenv==1.9.1
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==4.6.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==4.0.2
oslo.config==9.8.0
oslo.context==6.0.0
oslo.i18n==6.5.1
oslo.log==7.1.0
oslo.serialization==5.7.0
oslo.utils==9.0.0
packaging==25.0
pbr==6.1.1
platformdirs==4.3.8
prettytable==3.16.0
psutil==7.0.0
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.6.1
PyJWT==2.10.1
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.9.0
pyrsistent==0.20.0
python-cinderclient==9.7.0
python-dateutil==2.9.0.post0
python-heatclient==4.2.0
python-jenkins==1.8.2
python-keystoneclient==5.6.0
python-magnumclient==4.8.1
python-openstackclient==8.1.0
python-swiftclient==4.8.0
PyYAML==6.0.2
referencing==0.36.2
requests==2.32.4
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.25.1
rsa==4.9.1
ruamel.yaml==0.18.14
ruamel.yaml.clib==0.2.12
s3transfer==0.13.0
simplejson==3.20.1
six==1.17.0
smmap==5.0.2
soupsieve==2.7
stevedore==5.4.1
tabulate==0.9.0
toml==0.10.2
tomlkit==0.13.3
tqdm==4.67.1
typing_extensions==4.14.0
tzdata==2025.2
urllib3==1.26.20
virtualenv==20.31.2
wcwidth==0.2.13
websocket-client==1.8.0
wrapt==1.17.2
xdg==6.0.0
xmltodict==0.14.2
yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
[policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/sh /tmp/jenkins6357084058631707015.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/sh -xe /tmp/jenkins11227102259556747411.sh
+ /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/csit/run-project-csit.sh policy-opa-pdp
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
docker: 'compose' is not a docker command.
See 'docker --help'
Docker Compose Plugin not installed. Installing now...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
 71 60.2M   71 43.2M    0     0  64.9M      0 --:--:-- --:--:-- --:--:-- 64.9M
100 60.2M  100 60.2M    0     0  72.8M      0 --:--:-- --:--:-- --:--:--  106M
Setting project configuration for: policy-opa-pdp
Configuring docker compose...
Starting opa-pdp using postgres + Grafana/Prometheus
prometheus Pulling
postgres Pulling
kafka Pulling
opa-pdp Pulling
zookeeper Pulling
pap Pulling
api Pulling
grafana Pulling
policy-db-migrator Pulling
[image layer download/extraction progress omitted]
api Pulled
pap Pulled
policy-db-migrator Pulled
[image layer download/extraction progress continues for the remaining images]
[====================================> ] 186.1MB/257.9MB eabd8714fec9 Extracting [=======================================> ] 293.6MB/375MB 55f2b468da67 Extracting [====================================> ] 190MB/257.9MB eabd8714fec9 Extracting [=======================================> ] 294.7MB/375MB 55f2b468da67 Extracting [=====================================> ] 192.2MB/257.9MB 55f2b468da67 Extracting [=====================================> ] 195.5MB/257.9MB eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB 55f2b468da67 Extracting [======================================> ] 196.6MB/257.9MB eabd8714fec9 Extracting [=======================================> ] 299.1MB/375MB eabd8714fec9 Extracting [========================================> ] 301.9MB/375MB 55f2b468da67 Extracting [======================================> ] 200MB/257.9MB 2b1b549e99de Extracting [> ] 32.77kB/2.646MB eabd8714fec9 Extracting [========================================> ] 304.2MB/375MB 55f2b468da67 Extracting [=======================================> ] 202.2MB/257.9MB 2b1b549e99de Extracting [======> ] 327.7kB/2.646MB 55f2b468da67 Extracting [=======================================> ] 202.8MB/257.9MB eabd8714fec9 Extracting [========================================> ] 304.7MB/375MB 384497dbce3b Extracting [> ] 557.1kB/63.48MB 2b1b549e99de Extracting [==================================================>] 2.646MB/2.646MB 55f2b468da67 Extracting [=======================================> ] 204.4MB/257.9MB 12c5c803443f Pull complete 408012a7b118 Pull complete 55f2b468da67 Extracting [=======================================> ] 205MB/257.9MB e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB e27c75a98748 Extracting [==================================================>] 3.144kB/3.144kB 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 2b1b549e99de Pull complete eabd8714fec9 Extracting [========================================> ] 306.4MB/375MB 384497dbce3b Extracting [> ] 1.114MB/63.48MB 547372ea8ffa Extracting [> ] 131.1kB/12.63MB 55f2b468da67 Extracting [=======================================> ] 205.6MB/257.9MB e27c75a98748 Pull complete eabd8714fec9 Extracting [=========================================> ] 307.5MB/375MB 547372ea8ffa Extracting [=> ] 262.1kB/12.63MB 44986281b8b9 Pull complete bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 55f2b468da67 Extracting [========================================> ] 206.7MB/257.9MB 384497dbce3b Extracting [=> ] 1.671MB/63.48MB e73cb4a42719 Extracting [> ] 557.1kB/109.1MB 547372ea8ffa Extracting [===============> ] 3.801MB/12.63MB eabd8714fec9 Extracting [=========================================> ] 310.3MB/375MB 547372ea8ffa Extracting [==============================> ] 7.602MB/12.63MB e73cb4a42719 Extracting [=> ] 2.228MB/109.1MB 384497dbce3b Extracting [=> ] 2.228MB/63.48MB 547372ea8ffa Extracting [==================================================>] 12.63MB/12.63MB eabd8714fec9 Extracting [=========================================> ] 311.4MB/375MB e73cb4a42719 Extracting [==> ] 5.014MB/109.1MB 55f2b468da67 Extracting [========================================> ] 207.8MB/257.9MB e73cb4a42719 Extracting [===> ] 6.685MB/109.1MB eabd8714fec9 
Extracting [=========================================> ] 312MB/375MB 55f2b468da67 Extracting [========================================> ] 208.3MB/257.9MB e73cb4a42719 Extracting [===> ] 8.356MB/109.1MB bf70c5107ab5 Pull complete 384497dbce3b Extracting [==> ] 2.785MB/63.48MB 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB e73cb4a42719 Extracting [====> ] 10.03MB/109.1MB 547372ea8ffa Pull complete 55f2b468da67 Extracting [========================================> ] 210.6MB/257.9MB eabd8714fec9 Extracting [=========================================> ] 313.6MB/375MB 384497dbce3b Extracting [===> ] 3.899MB/63.48MB e73cb4a42719 Extracting [=====> ] 11.7MB/109.1MB 55f2b468da67 Extracting [=========================================> ] 211.7MB/257.9MB eabd8714fec9 Extracting [==========================================> ] 315.3MB/375MB 65d25c0f02f3 Extracting [> ] 294.9kB/28.98MB e73cb4a42719 Extracting [======> ] 14.48MB/109.1MB 384497dbce3b Extracting [===> ] 4.456MB/63.48MB 65d25c0f02f3 Extracting [=======> ] 4.424MB/28.98MB eabd8714fec9 Extracting [==========================================> ] 317.5MB/375MB e73cb4a42719 Extracting [=======> ] 17.27MB/109.1MB 55f2b468da67 Extracting [=========================================> ] 213.4MB/257.9MB 65d25c0f02f3 Extracting [============> ] 7.373MB/28.98MB 384497dbce3b Extracting [===> ] 5.014MB/63.48MB e73cb4a42719 Extracting [========> ] 18.38MB/109.1MB 1ccde423731d Pull complete eabd8714fec9 Extracting [==========================================> ] 319.8MB/375MB 55f2b468da67 Extracting [=========================================> ] 214.5MB/257.9MB 65d25c0f02f3 Extracting [==================> ] 10.62MB/28.98MB 384497dbce3b Extracting [=====> ] 7.242MB/63.48MB e73cb4a42719 Extracting [=========> ] 20.61MB/109.1MB 65d25c0f02f3 Extracting [=========================> ] 14.75MB/28.98MB 65d25c0f02f3 Extracting [======================================> ] 22.41MB/28.98MB e73cb4a42719 Extracting [=========> ] 21.73MB/109.1MB 55f2b468da67 Extracting [=========================================> ] 216.1MB/257.9MB eabd8714fec9 Extracting [==========================================> ] 322MB/375MB 65d25c0f02f3 Extracting [==================================================>] 28.98MB/28.98MB 7221d93db8a9 Extracting [==================================================>] 100B/100B 7221d93db8a9 Extracting [==================================================>] 100B/100B e73cb4a42719 Extracting [===========> ] 24.51MB/109.1MB eabd8714fec9 Extracting [===========================================> ] 323.6MB/375MB 384497dbce3b Extracting [======> ] 7.799MB/63.48MB 55f2b468da67 Extracting [==========================================> ] 218.4MB/257.9MB 65d25c0f02f3 Pull complete eabd8714fec9 Extracting [===========================================> ] 324.8MB/375MB e73cb4a42719 Extracting [============> ] 26.18MB/109.1MB 55f2b468da67 Extracting [==========================================> ] 220.6MB/257.9MB e73cb4a42719 Extracting [============> ] 26.74MB/109.1MB 55f2b468da67 Extracting [==========================================> ] 221.2MB/257.9MB eabd8714fec9 Extracting [===========================================> ] 327MB/375MB 384497dbce3b Extracting [=======> ] 9.47MB/63.48MB 90dd78f85976 Extracting [> ] 426kB/41.49MB e73cb4a42719 Extracting [===============> ] 32.87MB/109.1MB 90dd78f85976 Extracting [======> ] 5.112MB/41.49MB e73cb4a42719 Extracting 
[==================> ] 40.67MB/109.1MB 90dd78f85976 Extracting [============> ] 10.65MB/41.49MB e73cb4a42719 Extracting [======================> ] 48.46MB/109.1MB 90dd78f85976 Extracting [=================> ] 14.91MB/41.49MB 55f2b468da67 Extracting [===========================================> ] 222.3MB/257.9MB 384497dbce3b Extracting [=======> ] 10.03MB/63.48MB 7221d93db8a9 Pull complete 7df673c7455d Extracting [==================================================>] 694B/694B eabd8714fec9 Extracting [===========================================> ] 328.1MB/375MB 7df673c7455d Extracting [==================================================>] 694B/694B e73cb4a42719 Extracting [=======================> ] 51.81MB/109.1MB 55f2b468da67 Extracting [===========================================> ] 223.4MB/257.9MB 90dd78f85976 Extracting [====================> ] 17.04MB/41.49MB 384497dbce3b Extracting [========> ] 11.14MB/63.48MB e73cb4a42719 Extracting [========================> ] 53.48MB/109.1MB 90dd78f85976 Extracting [========================> ] 20.45MB/41.49MB 55f2b468da67 Extracting [===========================================> ] 225.6MB/257.9MB eabd8714fec9 Extracting [===========================================> ] 329.2MB/375MB 384497dbce3b Extracting [=========> ] 12.26MB/63.48MB 384497dbce3b Extracting [==========> ] 13.37MB/63.48MB 90dd78f85976 Extracting [===============================> ] 25.99MB/41.49MB eabd8714fec9 Extracting [============================================> ] 330.9MB/375MB 55f2b468da67 Extracting [===========================================> ] 226.7MB/257.9MB e73cb4a42719 Extracting [=========================> ] 55.15MB/109.1MB 384497dbce3b Extracting [============> ] 16.15MB/63.48MB 90dd78f85976 Extracting [=======================================> ] 32.8MB/41.49MB e73cb4a42719 Extracting [===========================> ] 59.05MB/109.1MB 90dd78f85976 Extracting [=============================================> ] 37.49MB/41.49MB eabd8714fec9 Extracting [============================================> ] 331.4MB/375MB 55f2b468da67 Extracting [============================================> ] 227.8MB/257.9MB 7df673c7455d Pull complete e73cb4a42719 Extracting [============================> ] 62.39MB/109.1MB 384497dbce3b Extracting [=============> ] 16.71MB/63.48MB 90dd78f85976 Extracting [================================================> ] 40.47MB/41.49MB eabd8714fec9 Extracting [============================================> ] 332.6MB/375MB 90dd78f85976 Extracting [==================================================>] 41.49MB/41.49MB e73cb4a42719 Extracting [=============================> ] 65.18MB/109.1MB 55f2b468da67 Extracting [============================================> ] 229MB/257.9MB 384497dbce3b Extracting [==============> ] 17.83MB/63.48MB eabd8714fec9 Extracting [============================================> ] 334.8MB/375MB e73cb4a42719 Extracting [===============================> ] 67.96MB/109.1MB 384497dbce3b Extracting [===============> ] 20.05MB/63.48MB 55f2b468da67 Extracting [============================================> ] 230.6MB/257.9MB eabd8714fec9 Extracting [============================================> ] 336.5MB/375MB e73cb4a42719 Extracting [=================================> ] 72.42MB/109.1MB 384497dbce3b Extracting [==================> ] 23.4MB/63.48MB e73cb4a42719 Extracting [====================================> ] 78.54MB/109.1MB 384497dbce3b Extracting [=====================> ] 27.3MB/63.48MB e73cb4a42719 Extracting [======================================> ] 
83.56MB/109.1MB eabd8714fec9 Extracting [=============================================> ] 339.2MB/375MB e73cb4a42719 Extracting [=======================================> ] 85.23MB/109.1MB 384497dbce3b Extracting [========================> ] 30.64MB/63.48MB eabd8714fec9 Extracting [=============================================> ] 339.8MB/375MB 55f2b468da67 Extracting [============================================> ] 231.7MB/257.9MB 90dd78f85976 Pull complete e73cb4a42719 Extracting [=======================================> ] 86.9MB/109.1MB 384497dbce3b Extracting [=========================> ] 31.75MB/63.48MB prometheus Pulled e73cb4a42719 Extracting [========================================> ] 88.01MB/109.1MB 384497dbce3b Extracting [=========================> ] 32.31MB/63.48MB 55f2b468da67 Extracting [=============================================> ] 232.8MB/257.9MB eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB e73cb4a42719 Extracting [=========================================> ] 91.36MB/109.1MB 384497dbce3b Extracting [===========================> ] 34.54MB/63.48MB 55f2b468da67 Extracting [=============================================> ] 236.2MB/257.9MB e73cb4a42719 Extracting [==========================================> ] 93.59MB/109.1MB 384497dbce3b Extracting [============================> ] 36.77MB/63.48MB eabd8714fec9 Extracting [=============================================> ] 342MB/375MB 55f2b468da67 Extracting [==============================================> ] 239.5MB/257.9MB e73cb4a42719 Extracting [===========================================> ] 95.81MB/109.1MB 384497dbce3b Extracting [===============================> ] 39.55MB/63.48MB 55f2b468da67 Extracting [===============================================> ] 244MB/257.9MB e73cb4a42719 Extracting [=============================================> ] 98.6MB/109.1MB 384497dbce3b Extracting [=================================> ] 42.34MB/63.48MB 384497dbce3b Extracting [====================================> ] 46.24MB/63.48MB 384497dbce3b Extracting [=======================================> ] 49.58MB/63.48MB e73cb4a42719 Extracting [=============================================> ] 99.16MB/109.1MB eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB 4f4fb700ef54 Extracting [==================================================>] 32B/32B 4f4fb700ef54 Extracting [==================================================>] 32B/32B 384497dbce3b Extracting [========================================> ] 51.25MB/63.48MB e73cb4a42719 Extracting [==============================================> ] 100.8MB/109.1MB 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB 384497dbce3b Extracting [==========================================> ] 54.03MB/63.48MB eabd8714fec9 Extracting [=============================================> ] 343.1MB/375MB e73cb4a42719 Extracting [==============================================> ] 102.5MB/109.1MB 384497dbce3b Extracting [==========================================> ] 54.59MB/63.48MB 55f2b468da67 Extracting [================================================> ] 248.4MB/257.9MB eabd8714fec9 Extracting [=============================================> ] 343.7MB/375MB 55f2b468da67 Extracting [================================================> ] 251.8MB/257.9MB 384497dbce3b Extracting [==============================================> ] 58.49MB/63.48MB eabd8714fec9 Extracting [==============================================> ] 
345.4MB/375MB e73cb4a42719 Extracting [===============================================> ] 103.6MB/109.1MB 55f2b468da67 Extracting [=================================================> ] 253.5MB/257.9MB e73cb4a42719 Extracting [===============================================> ] 104.2MB/109.1MB 384497dbce3b Extracting [==============================================> ] 59.05MB/63.48MB 55f2b468da67 Extracting [=================================================> ] 256.8MB/257.9MB 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB e73cb4a42719 Extracting [================================================> ] 105.3MB/109.1MB 4f4fb700ef54 Pull complete eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB e73cb4a42719 Extracting [================================================> ] 105.8MB/109.1MB eabd8714fec9 Extracting [==============================================> ] 346.5MB/375MB 384497dbce3b Extracting [==============================================> ] 59.6MB/63.48MB 384497dbce3b Extracting [=================================================> ] 62.39MB/63.48MB eabd8714fec9 Extracting [==============================================> ] 348.7MB/375MB eabd8714fec9 Extracting [==============================================> ] 350.9MB/375MB eabd8714fec9 Extracting [==============================================> ] 351.5MB/375MB 384497dbce3b Extracting [=================================================> ] 62.95MB/63.48MB eabd8714fec9 Extracting [==============================================> ] 352.1MB/375MB 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB 384497dbce3b Extracting [==================================================>] 63.48MB/63.48MB e73cb4a42719 Extracting [=================================================> ] 107.5MB/109.1MB eabd8714fec9 Extracting [===============================================> ] 354.3MB/375MB e73cb4a42719 Extracting [=================================================> ] 108.6MB/109.1MB 55f2b468da67 Pull complete e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB eabd8714fec9 Extracting [================================================> ] 362.6MB/375MB eabd8714fec9 Extracting [=================================================> ] 368.8MB/375MB eabd8714fec9 Extracting [=================================================> ] 373.8MB/375MB eabd8714fec9 Extracting [==================================================>] 375MB/375MB opa-pdp Pulled 82bfc142787e Extracting [> ] 98.3kB/8.613MB 82bfc142787e Extracting [=================================> ] 5.702MB/8.613MB 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB 384497dbce3b Pull complete 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB 055b9255fa03 Extracting [==================================================>] 11.92kB/11.92kB e73cb4a42719 Pull complete eabd8714fec9 Pull complete 82bfc142787e Pull complete a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 46baca71a4ef 
Extracting [==================================================>] 18.11kB/18.11kB 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 45fd2fec8a19 Pull complete 055b9255fa03 Pull complete a83b68436f09 Pull complete 46baca71a4ef Pull complete 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 787d6bee9571 Extracting [==================================================>] 127B/127B 787d6bee9571 Extracting [==================================================>] 127B/127B b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB b176d7edde70 Extracting [==================================================>] 1.227kB/1.227kB b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB 8f10199ed94b Extracting [============> ] 2.163MB/8.768MB b0e0ef7895f4 Extracting [===========> ] 8.651MB/37.01MB 787d6bee9571 Pull complete b176d7edde70 Pull complete 13ff0988aaea Extracting [==================================================>] 167B/167B 13ff0988aaea Extracting [==================================================>] 167B/167B 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB grafana Pulled 8f10199ed94b Pull complete f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB b0e0ef7895f4 Extracting [===========================> ] 20.45MB/37.01MB 13ff0988aaea Pull complete 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB f963a77d2726 Pull complete b0e0ef7895f4 Extracting [=================================================> ] 36.96MB/37.01MB b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB 4b82842ab819 Pull complete b0e0ef7895f4 Pull complete c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 7e568a0dc8fb Extracting [==================================================>] 184B/184B 7e568a0dc8fb Extracting [==================================================>] 184B/184B f3a82e9f1761 Extracting [=================> ] 15.14MB/44.41MB 7e568a0dc8fb Pull complete c0c90eeb8aca Pull complete 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 5cfb27c10ea5 Extracting [==================================================>] 852B/852B postgres Pulled f3a82e9f1761 Extracting [================================> ] 28.9MB/44.41MB 5cfb27c10ea5 Pull complete 40a5eed61bb0 Extracting [==================================================>] 98B/98B 40a5eed61bb0 Extracting [==================================================>] 98B/98B f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB f3a82e9f1761 Pull complete 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 40a5eed61bb0 Pull complete e040ea11fa10 Extracting [==================================================>] 173B/173B e040ea11fa10 Extracting [==================================================>] 173B/173B 79161a3f5362 Pull complete 9c266ba63f51 Extracting 
[==================================================>] 1.105kB/1.105kB 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB e040ea11fa10 Pull complete 9c266ba63f51 Pull complete 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB 2e8a7df9c2ee Pull complete 10f05dd8b1db Extracting [==================================================>] 98B/98B 10f05dd8b1db Extracting [==================================================>] 98B/98B 09d5a3f70313 Extracting [======> ] 13.93MB/109.2MB 09d5a3f70313 Extracting [==============> ] 30.64MB/109.2MB 10f05dd8b1db Pull complete 41dac8b43ba6 Extracting [==================================================>] 171B/171B 41dac8b43ba6 Extracting [==================================================>] 171B/171B 09d5a3f70313 Extracting [=======================> ] 50.69MB/109.2MB 41dac8b43ba6 Pull complete 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 09d5a3f70313 Extracting [===============================> ] 67.96MB/109.2MB 71a9f6a9ab4d Pull complete 09d5a3f70313 Extracting [======================================> ] 83MB/109.2MB da3ed5db7103 Extracting [> ] 557.1kB/127.4MB 09d5a3f70313 Extracting [=============================================> ] 98.6MB/109.2MB da3ed5db7103 Extracting [=====> ] 13.93MB/127.4MB 09d5a3f70313 Extracting [================================================> ] 106.4MB/109.2MB da3ed5db7103 Extracting [===========> ] 28.41MB/127.4MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 09d5a3f70313 Pull complete 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB da3ed5db7103 Extracting [===============> ] 38.99MB/127.4MB da3ed5db7103 Extracting [======================> ] 56.82MB/127.4MB 356f5c2c843b Pull complete kafka Pulled da3ed5db7103 Extracting [=============================> ] 75.76MB/127.4MB da3ed5db7103 Extracting [======================================> ] 98.6MB/127.4MB da3ed5db7103 Extracting [=============================================> ] 115.9MB/127.4MB da3ed5db7103 Extracting [================================================> ] 123.1MB/127.4MB da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB da3ed5db7103 Pull complete c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB c955f6e31a04 Pull complete zookeeper Pulled Network compose_default Creating Network compose_default Created Container prometheus Creating Container postgres Creating Container zookeeper Creating Container prometheus Created Container grafana Creating Container postgres Created Container policy-db-migrator Creating Container zookeeper Created Container kafka Creating Container policy-db-migrator Created Container policy-api Creating Container grafana Created Container kafka Created Container policy-api Created Container policy-pap Creating Container policy-pap Created Container policy-opa-pdp Creating Container 
Container policy-opa-pdp Created
Container zookeeper Starting
Container prometheus Starting
Container postgres Starting
Container zookeeper Started
Container kafka Starting
Container kafka Started
Container postgres Started
Container policy-db-migrator Starting
Container policy-db-migrator Started
Container policy-api Starting
Container prometheus Started
Container grafana Starting
Container policy-api Started
Container policy-pap Starting
Container policy-pap Started
Container policy-opa-pdp Starting
Container policy-opa-pdp Started
Container grafana Started
Prometheus server: http://localhost:30259
Grafana server: http://localhost:30269
Waiting 3 minutes for OPA-PDP to start...
Checking if REST port 30003 is open on localhost ...
IMAGE                                                      NAMES            STATUS
nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 3 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 3 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 3 minutes
nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 3 minutes
nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 3 minutes
nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 3 minutes
Checking if REST port 30012 is open on localhost ...
IMAGE                                                      NAMES            STATUS
nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 3 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 3 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 3 minutes
nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 3 minutes
nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 3 minutes
nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 3 minutes
Cloning into '/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/csit/resources/tests/models'...
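Note: the "Checking if REST port 30003/30012 is open on localhost ..." steps above can be reproduced with a small poll loop. The sketch below is illustrative only and is not the CSIT helper script; only the host, the two port numbers, and the 3-minute wait come from this log, while the function name and retry interval are assumptions.

# Illustrative sketch (not the actual CSIT script): poll localhost TCP ports
# until they accept a connection, mirroring the port checks logged above.
import socket
import time

def wait_for_port(port, host="localhost", timeout_s=180):
    # timeout_s=180 mirrors the "Waiting 3 minutes for OPA-PDP to start..." step
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return True   # port is open and accepting connections
        except OSError:
            time.sleep(5)     # not listening yet; retry until the deadline
    return False

for port in (30003, 30012):   # the two REST ports checked in this log
    print(f"port {port} open: {wait_for_port(port)}")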
Building robot framework docker image
sha256:afbc8e811338be52e8e793ff7bd1e20da001e6cedaac667ffcf9841b8746ba8c
top - 11:50:34 up 6 min, 0 users, load average: 1.14, 1.15, 0.60
Tasks: 219 total, 1 running, 148 sleeping, 0 stopped, 0 zombie
%Cpu(s): 9.7 us, 2.2 sy, 0.0 ni, 84.4 id, 3.6 wa, 0.0 hi, 0.1 si, 0.1 st
              total        used        free      shared  buff/cache   available
Mem:            31G        2.3G         21G         28M        7.3G         28G
Swap:          1.0G          0B        1.0G
IMAGE                                                      NAMES            STATUS
nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 3 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 3 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 3 minutes
nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 3 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 3 minutes
nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 3 minutes
nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 3 minutes
CONTAINER ID   NAME             CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O        PIDS
baf39881ae3a   policy-opa-pdp   0.30%   12.92MiB / 31.41GiB   0.04%   82.1kB / 79.3kB   0B / 0B          20
68ccd4bd593c   policy-pap       0.68%   484.1MiB / 31.41GiB   1.50%   2.21MB / 1.23MB   0B / 139MB       69
69516802d014   policy-api       0.14%   399.4MiB / 31.41GiB   1.24%   1.15MB / 1.05MB   0B / 0B          60
60e86e26928d   kafka            2.31%   393.4MiB / 31.41GiB   1.22%   310kB / 292kB     8.19kB / 692kB   83
3b716bb711b4   grafana          0.22%   117.6MiB / 31.41GiB   0.37%   19.1MB / 181kB    0B / 31.7MB      20
dd5b834b7b6d   zookeeper        0.08%   84.77MiB / 31.41GiB   0.26%   56.9kB / 51.4kB   229kB / 426kB    62
0ecb3f986312   postgres         0.02%   86.42MiB / 31.41GiB   0.27%   2.55MB / 3.73MB   0B / 159MB       26
2a60ac360ea8   prometheus       0.19%   21.11MiB / 31.41GiB   0.07%   204kB / 10.2kB    0B / 0B          12
Container policy-csit Creating
Container policy-csit Created
Attaching to policy-csit
policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot
policy-csit | Run Robot test
policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
policy-csit | -v POLICY_API_IP:policy-api:6969
policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
policy-csit | -v POLICY_PAP_IP:policy-pap:6969
policy-csit | -v APEX_IP:policy-apex-pdp:6969
policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
policy-csit | -v KAFKA_IP:kafka:9092
policy-csit | -v PROMETHEUS_IP:prometheus:9090
policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
policy-csit | -v DROOLS_IP:policy-drools-apps:6969
policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
policy-csit | -v TEMP_FOLDER:/tmp/distribution
policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
policy-csit | -v TEST_ENV:docker
policy-csit | -v JAEGER_IP:jaeger:16686
policy-csit | Starting Robot test suites ...
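Note: the ROBOT_VARIABLES listed above are passed to Robot Framework as -v NAME:value options. As a rough illustration, the same suites could be launched through Robot Framework's Python API as sketched below; the suite names, variable values, and the /tmp/results output directory are taken from this log, but this is not the actual run wrapper used by the policy-csit container.

# Illustrative sketch only: launch the two suites named above with a subset of
# the ROBOT_VARIABLES from this log via Robot Framework's Python API.
from robot import run

run(
    "opa-pdp-test.robot",
    "opa-pdp-slas.robot",
    variable=[
        "DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies",
        "POLICY_API_IP:policy-api:6969",
        "POLICY_PAP_IP:policy-pap:6969",
        "POLICY_OPA_IP:policy-opa-pdp:8282",
        "PROMETHEUS_IP:prometheus:9090",
        "KAFKA_IP:kafka:9092",
        "TEST_ENV:docker",
    ],
    outputdir="/tmp/results",  # matches the Output/Log/Report paths reported below
)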
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test
policy-csit | ==============================================================================
policy-csit | Healthcheck :: Verify OPA PDP health check | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateDataBeforePolicyDeployment | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesZonePolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesVehiclePolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidatesAbacPolicy | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test | PASS |
policy-csit | 5 tests, 5 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas
policy-csit | ==============================================================================
policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS |
policy-csit | ------------------------------------------------------------------------------
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas | PASS |
policy-csit | 5 tests, 5 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas | PASS |
policy-csit | 10 tests, 10 passed, 0 failed
policy-csit | ==============================================================================
policy-csit | Output: /tmp/results/output.xml
policy-csit | Log: /tmp/results/log.html
policy-csit | Report: /tmp/results/report.html
policy-csit | RESULT: 0
policy-csit exited with code 0
IMAGE                                                      NAMES            STATUS
nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp   Up 6 minutes
nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap       Up 6 minutes
nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api       Up 6 minutes
nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka            Up 6 minutes
nexus3.onap.org:10001/grafana/grafana:latest               grafana          Up 6 minutes
nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper        Up 6 minutes
nexus3.onap.org:10001/library/postgres:16.4                postgres         Up 6 minutes
nexus3.onap.org:10001/prom/prometheus:latest               prometheus       Up 6 minutes
Shut down started!
Collecting logs from docker compose containers...
grafana | logger=settings t=2025-06-16T11:46:52.876041381Z level=info msg="Starting Grafana" version=12.0.1+security-01 commit=ff20b06681749873999bb0a8e365f24fddaee33f branch=HEAD compiled=2025-06-16T11:46:52Z
grafana | logger=settings t=2025-06-16T11:46:52.876433068Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2025-06-16T11:46:52.876481518Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2025-06-16T11:46:52.876505899Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2025-06-16T11:46:52.87658136Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2025-06-16T11:46:52.876621051Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-16T11:46:52.876681842Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-16T11:46:52.876706662Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2025-06-16T11:46:52.876769163Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2025-06-16T11:46:52.876798584Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2025-06-16T11:46:52.876881835Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2025-06-16T11:46:52.876945116Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2025-06-16T11:46:52.877054448Z level=info msg=Target target=[all]
grafana | logger=settings t=2025-06-16T11:46:52.877101708Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2025-06-16T11:46:52.87716604Z level=info msg="Path 
Data" path=/var/lib/grafana grafana | logger=settings t=2025-06-16T11:46:52.877249781Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2025-06-16T11:46:52.877368943Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2025-06-16T11:46:52.877453644Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2025-06-16T11:46:52.877582626Z level=info msg="App mode production" grafana | logger=featuremgmt t=2025-06-16T11:46:52.878019494Z level=info msg=FeatureToggles alertRuleRestore=true alertingRuleVersionHistoryRestore=true prometheusAzureOverrideAudience=true logsPanelControls=true alertingNotificationsStepMode=true influxdbBackendMigration=true correlations=true dashgpt=true newDashboardSharingComponent=true promQLScope=true transformationsRedesign=true alertingInsights=true pluginsDetailsRightPanel=true useSessionStorageForRedirection=true alertingRuleRecoverDeleted=true dataplaneFrontendFallback=true logRowsPopoverMenu=true azureMonitorEnableUserAuth=true recoveryThreshold=true cloudWatchRoundUpEndTime=true dashboardSceneSolo=true unifiedRequestLog=true grafanaconThemes=true lokiStructuredMetadata=true addFieldFromCalculationStatFunctions=true logsContextDatasourceUi=true pinNavItems=true kubernetesClientDashboardsFolders=true nestedFolders=true tlsMemcached=true lokiQueryHints=true panelMonitoring=true cloudWatchNewLabelParsing=true alertingApiServer=true alertingRulePermanentlyDelete=true cloudWatchCrossAccountQuerying=true alertingSimplifiedRouting=true newFiltersUI=true onPremToCloudMigrations=true dashboardScene=true dashboardSceneForViewers=true ssoSettingsApi=true newPDFRendering=true formatString=true kubernetesPlaylists=true ssoSettingsSAML=true angularDeprecationUI=true logsInfiniteScrolling=true lokiLabelNamesQueryApi=true awsAsyncQueryCaching=true reportingUseRawTimeRange=true recordedQueriesMulti=true groupToNestedTableTransformation=true unifiedStorageSearchPermissionFiltering=true alertingUIOptimizeReducer=true prometheusUsesCombobox=true publicDashboardsScene=true logsExploreTableVisualisation=true externalCorePlugins=true failWrongDSUID=true preinstallAutoUpdate=true alertingQueryAndExpressionsStepMode=true lokiQuerySplitting=true annotationPermissionUpdate=true azureMonitorPrometheusExemplars=true grafana | logger=sqlstore t=2025-06-16T11:46:52.878198927Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2025-06-16T11:46:52.878286869Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2025-06-16T11:46:52.879863115Z level=info msg="Locking database" grafana | logger=migrator t=2025-06-16T11:46:52.879914645Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2025-06-16T11:46:52.880609377Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2025-06-16T11:46:52.881648825Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.038878ms grafana | logger=migrator t=2025-06-16T11:46:52.916257082Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2025-06-16T11:46:52.917808758Z level=info msg="Migration successfully executed" id="create user table" duration=1.550186ms grafana | logger=migrator t=2025-06-16T11:46:52.923700276Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2025-06-16T11:46:52.924577621Z level=info 
msg="Migration successfully executed" id="add unique index user.login" duration=877.215µs grafana | logger=migrator t=2025-06-16T11:46:52.927778934Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2025-06-16T11:46:52.9286779Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=898.546µs grafana | logger=migrator t=2025-06-16T11:46:52.932198918Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2025-06-16T11:46:52.933015872Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=816.574µs grafana | logger=migrator t=2025-06-16T11:46:52.939123883Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2025-06-16T11:46:52.940005719Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=881.446µs grafana | logger=migrator t=2025-06-16T11:46:52.943484017Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2025-06-16T11:46:52.946006789Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.520462ms grafana | logger=migrator t=2025-06-16T11:46:52.949021839Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2025-06-16T11:46:52.949970765Z level=info msg="Migration successfully executed" id="create user table v2" duration=947.965µs grafana | logger=migrator t=2025-06-16T11:46:52.955002319Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2025-06-16T11:46:52.955771431Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=768.692µs grafana | logger=migrator t=2025-06-16T11:46:52.960326308Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2025-06-16T11:46:52.96107072Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=744.162µs grafana | logger=migrator t=2025-06-16T11:46:52.965712518Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2025-06-16T11:46:52.966203716Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=490.788µs grafana | logger=migrator t=2025-06-16T11:46:52.970121651Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2025-06-16T11:46:52.970872744Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=750.273µs grafana | logger=migrator t=2025-06-16T11:46:52.97546462Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2025-06-16T11:46:52.976600429Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.134759ms grafana | logger=migrator t=2025-06-16T11:46:52.979910204Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2025-06-16T11:46:52.980060857Z level=info msg="Migration successfully executed" id="Update user table charset" duration=151.073µs grafana | logger=migrator t=2025-06-16T11:46:52.984059693Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2025-06-16T11:46:52.985222483Z level=info msg="Migration successfully executed" id="Add 
last_seen_at column to user" duration=1.16493ms grafana | logger=migrator t=2025-06-16T11:46:52.988370526Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2025-06-16T11:46:52.988732862Z level=info msg="Migration successfully executed" id="Add missing user data" duration=361.986µs grafana | logger=migrator t=2025-06-16T11:46:52.993607053Z level=info msg="Executing migration" id="Add is_disabled column to user" grafana | logger=migrator t=2025-06-16T11:46:52.994890294Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.282561ms grafana | logger=migrator t=2025-06-16T11:46:53.000255684Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2025-06-16T11:46:53.00121316Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=957.076µs grafana | logger=migrator t=2025-06-16T11:46:53.0041968Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2025-06-16T11:46:53.005070214Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=873.004µs grafana | logger=migrator t=2025-06-16T11:46:53.037135709Z level=info msg="Executing migration" id="Update is_service_account column to nullable" grafana | logger=migrator t=2025-06-16T11:46:53.049728729Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=12.58961ms grafana | logger=migrator t=2025-06-16T11:46:53.052889812Z level=info msg="Executing migration" id="Add uid column to user" grafana | logger=migrator t=2025-06-16T11:46:53.054190723Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.300601ms grafana | logger=migrator t=2025-06-16T11:46:53.058840361Z level=info msg="Executing migration" id="Update uid column values for users" grafana | logger=migrator t=2025-06-16T11:46:53.059145576Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=304.585µs grafana | logger=migrator t=2025-06-16T11:46:53.063686272Z level=info msg="Executing migration" id="Add unique index user_uid" grafana | logger=migrator t=2025-06-16T11:46:53.064587417Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=899.825µs grafana | logger=migrator t=2025-06-16T11:46:53.068083815Z level=info msg="Executing migration" id="Add is_provisioned column to user" grafana | logger=migrator t=2025-06-16T11:46:53.069958077Z level=info msg="Migration successfully executed" id="Add is_provisioned column to user" duration=1.873382ms grafana | logger=migrator t=2025-06-16T11:46:53.075842675Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" grafana | logger=migrator t=2025-06-16T11:46:53.076387974Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=551.489µs grafana | logger=migrator t=2025-06-16T11:46:53.07975561Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once" grafana | logger=migrator t=2025-06-16T11:46:53.08037764Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=621.73µs grafana | logger=migrator 
t=2025-06-16T11:46:53.083580214Z level=info msg="Executing migration" id="update login and email fields to lowercase" grafana | logger=migrator t=2025-06-16T11:46:53.084034681Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=454.077µs grafana | logger=migrator t=2025-06-16T11:46:53.089505722Z level=info msg="Executing migration" id="update login and email fields to lowercase2" grafana | logger=migrator t=2025-06-16T11:46:53.089867278Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=359.216µs grafana | logger=migrator t=2025-06-16T11:46:53.093220794Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2025-06-16T11:46:53.09414653Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=925.376µs grafana | logger=migrator t=2025-06-16T11:46:53.09716428Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2025-06-16T11:46:53.097926113Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=761.603µs grafana | logger=migrator t=2025-06-16T11:46:53.101113586Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator t=2025-06-16T11:46:53.101915499Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=802.163µs grafana | logger=migrator t=2025-06-16T11:46:53.107624924Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2025-06-16T11:46:53.108367997Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=743.733µs grafana | logger=migrator t=2025-06-16T11:46:53.111335557Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2025-06-16T11:46:53.112013888Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=677.481µs grafana | logger=migrator t=2025-06-16T11:46:53.115949733Z level=info msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2025-06-16T11:46:53.115977774Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=28.461µs grafana | logger=migrator t=2025-06-16T11:46:53.12169352Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2025-06-16T11:46:53.122391481Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=697.821µs grafana | logger=migrator t=2025-06-16T11:46:53.125424212Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2025-06-16T11:46:53.126106673Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=681.761µs grafana | logger=migrator t=2025-06-16T11:46:53.129154324Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2025-06-16T11:46:53.129819885Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=665.311µs grafana | logger=migrator t=2025-06-16T11:46:53.162846956Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator 
t=2025-06-16T11:46:53.163976355Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.130949ms grafana | logger=migrator t=2025-06-16T11:46:53.167590245Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-16T11:46:53.173074567Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=5.482882ms grafana | logger=migrator t=2025-06-16T11:46:53.176449433Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2025-06-16T11:46:53.177311877Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=862.174µs grafana | logger=migrator t=2025-06-16T11:46:53.181719591Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2025-06-16T11:46:53.182500744Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=780.813µs grafana | logger=migrator t=2025-06-16T11:46:53.185601075Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2025-06-16T11:46:53.186712064Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.110479ms grafana | logger=migrator t=2025-06-16T11:46:53.189985479Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2025-06-16T11:46:53.190743421Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=761.112µs grafana | logger=migrator t=2025-06-16T11:46:53.19603181Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2025-06-16T11:46:53.196802612Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=770.532µs grafana | logger=migrator t=2025-06-16T11:46:53.200126178Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2025-06-16T11:46:53.200509885Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=383.607µs grafana | logger=migrator t=2025-06-16T11:46:53.203462414Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2025-06-16T11:46:53.204037993Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=575.049µs grafana | logger=migrator t=2025-06-16T11:46:53.206734458Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" grafana | logger=migrator t=2025-06-16T11:46:53.207117335Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=382.687µs grafana | logger=migrator t=2025-06-16T11:46:53.212635377Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2025-06-16T11:46:53.213825896Z level=info msg="Migration successfully executed" id="create star table" duration=1.189539ms grafana | logger=migrator t=2025-06-16T11:46:53.217543909Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2025-06-16T11:46:53.218380733Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=836.984µs grafana | logger=migrator 
t=2025-06-16T11:46:53.220979096Z level=info msg="Executing migration" id="Add column dashboard_uid in star" grafana | logger=migrator t=2025-06-16T11:46:53.22242499Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=1.445364ms grafana | logger=migrator t=2025-06-16T11:46:53.227755968Z level=info msg="Executing migration" id="Add column org_id in star" grafana | logger=migrator t=2025-06-16T11:46:53.229754122Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.995634ms grafana | logger=migrator t=2025-06-16T11:46:53.232986656Z level=info msg="Executing migration" id="Add column updated in star" grafana | logger=migrator t=2025-06-16T11:46:53.235291124Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=2.304408ms grafana | logger=migrator t=2025-06-16T11:46:53.238507298Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" grafana | logger=migrator t=2025-06-16T11:46:53.239339992Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=832.224µs grafana | logger=migrator t=2025-06-16T11:46:53.243264818Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2025-06-16T11:46:53.24400232Z level=info msg="Migration successfully executed" id="create org table v1" duration=734.592µs grafana | logger=migrator t=2025-06-16T11:46:53.249205647Z level=info msg="Executing migration" id="create index UQE_org_name - v1" grafana | logger=migrator t=2025-06-16T11:46:53.25001757Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=811.593µs grafana | logger=migrator t=2025-06-16T11:46:53.253782843Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2025-06-16T11:46:53.254897181Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.114188ms grafana | logger=migrator t=2025-06-16T11:46:53.258104905Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2025-06-16T11:46:53.259312796Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.20406ms grafana | logger=migrator t=2025-06-16T11:46:53.294607924Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2025-06-16T11:46:53.296807791Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=2.202057ms grafana | logger=migrator t=2025-06-16T11:46:53.301715123Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2025-06-16T11:46:53.303573783Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.86029ms grafana | logger=migrator t=2025-06-16T11:46:53.307130153Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2025-06-16T11:46:53.307263175Z level=info msg="Migration successfully executed" id="Update org table charset" duration=133.332µs grafana | logger=migrator t=2025-06-16T11:46:53.309627515Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2025-06-16T11:46:53.309758777Z level=info msg="Migration successfully executed" id="Update 
org_user table charset" duration=131.522µs grafana | logger=migrator t=2025-06-16T11:46:53.312984721Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2025-06-16T11:46:53.313345407Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=355.385µs grafana | logger=migrator t=2025-06-16T11:46:53.318035175Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2025-06-16T11:46:53.319385207Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.349732ms grafana | logger=migrator t=2025-06-16T11:46:53.323887253Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2025-06-16T11:46:53.324843338Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=956.035µs grafana | logger=migrator t=2025-06-16T11:46:53.328077462Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2025-06-16T11:46:53.328893265Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=815.093µs grafana | logger=migrator t=2025-06-16T11:46:53.332409065Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator t=2025-06-16T11:46:53.333446272Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.038117ms grafana | logger=migrator t=2025-06-16T11:46:53.336641675Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2025-06-16T11:46:53.337729714Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.087909ms grafana | logger=migrator t=2025-06-16T11:46:53.342602204Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2025-06-16T11:46:53.343447549Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=845.115µs grafana | logger=migrator t=2025-06-16T11:46:53.346609202Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2025-06-16T11:46:53.352256936Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.647554ms grafana | logger=migrator t=2025-06-16T11:46:53.356530217Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2025-06-16T11:46:53.357353011Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=825.114µs grafana | logger=migrator t=2025-06-16T11:46:53.362452076Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2025-06-16T11:46:53.364277356Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.823691ms grafana | logger=migrator t=2025-06-16T11:46:53.367810755Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2025-06-16T11:46:53.369311351Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.499926ms grafana | logger=migrator t=2025-06-16T11:46:53.37287401Z level=info msg="Executing migration" id="copy dashboard v1 
to v2" grafana | logger=migrator t=2025-06-16T11:46:53.373278996Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=403.896µs grafana | logger=migrator t=2025-06-16T11:46:53.378202799Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2025-06-16T11:46:53.379061703Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=858.314µs grafana | logger=migrator t=2025-06-16T11:46:53.383858883Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2025-06-16T11:46:53.383909924Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=53.69µs grafana | logger=migrator t=2025-06-16T11:46:53.38907919Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2025-06-16T11:46:53.391081774Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.001894ms grafana | logger=migrator t=2025-06-16T11:46:53.422544568Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2025-06-16T11:46:53.426507514Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=3.963646ms grafana | logger=migrator t=2025-06-16T11:46:53.429602876Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2025-06-16T11:46:53.431412236Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.80839ms grafana | logger=migrator t=2025-06-16T11:46:53.434308475Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2025-06-16T11:46:53.435050177Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=741.272µs grafana | logger=migrator t=2025-06-16T11:46:53.439919398Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2025-06-16T11:46:53.441889071Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.968573ms grafana | logger=migrator t=2025-06-16T11:46:53.445632874Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2025-06-16T11:46:53.446405726Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=772.172µs grafana | logger=migrator t=2025-06-16T11:46:53.452618601Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2025-06-16T11:46:53.454598393Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.978272ms grafana | logger=migrator t=2025-06-16T11:46:53.459005306Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2025-06-16T11:46:53.459048977Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=44.841µs grafana | logger=migrator t=2025-06-16T11:46:53.461452697Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2025-06-16T11:46:53.461481418Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=29.091µs grafana | logger=migrator 
t=2025-06-16T11:46:53.465085938Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2025-06-16T11:46:53.466620444Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.533476ms grafana | logger=migrator t=2025-06-16T11:46:53.475785696Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2025-06-16T11:46:53.47898359Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=3.195214ms grafana | logger=migrator t=2025-06-16T11:46:53.484522862Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2025-06-16T11:46:53.48740051Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.876558ms grafana | logger=migrator t=2025-06-16T11:46:53.490555273Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2025-06-16T11:46:53.492332083Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.77631ms grafana | logger=migrator t=2025-06-16T11:46:53.495125749Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2025-06-16T11:46:53.495311762Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=188.633µs grafana | logger=migrator t=2025-06-16T11:46:53.501345013Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2025-06-16T11:46:53.502206477Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=861.064µs grafana | logger=migrator t=2025-06-16T11:46:53.506260525Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2025-06-16T11:46:53.507371483Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.110418ms grafana | logger=migrator t=2025-06-16T11:46:53.510819081Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2025-06-16T11:46:53.510846191Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=28.5µs grafana | logger=migrator t=2025-06-16T11:46:53.515779814Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2025-06-16T11:46:53.51672795Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=947.716µs grafana | logger=migrator t=2025-06-16T11:46:53.519868792Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2025-06-16T11:46:53.52094271Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.073208ms grafana | logger=migrator t=2025-06-16T11:46:53.552655408Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-16T11:46:53.560923746Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=8.272308ms grafana | logger=migrator t=2025-06-16T11:46:53.566064622Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator 
t=2025-06-16T11:46:53.566721343Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=656.331µs grafana | logger=migrator t=2025-06-16T11:46:53.5695581Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2025-06-16T11:46:53.570229482Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=670.252µs grafana | logger=migrator t=2025-06-16T11:46:53.574272089Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2025-06-16T11:46:53.57494245Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=670.241µs grafana | logger=migrator t=2025-06-16T11:46:53.581333018Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2025-06-16T11:46:53.581907127Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=576.069µs grafana | logger=migrator t=2025-06-16T11:46:53.584998589Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2025-06-16T11:46:53.585776191Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=773.932µs grafana | logger=migrator t=2025-06-16T11:46:53.588742921Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2025-06-16T11:46:53.591042399Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.298628ms grafana | logger=migrator t=2025-06-16T11:46:53.593955028Z level=info msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2025-06-16T11:46:53.594800102Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=844.704µs grafana | logger=migrator t=2025-06-16T11:46:53.600630949Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2025-06-16T11:46:53.600896844Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=265.495µs grafana | logger=migrator t=2025-06-16T11:46:53.605025782Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2025-06-16T11:46:53.605440899Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=413.857µs grafana | logger=migrator t=2025-06-16T11:46:53.609062659Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2025-06-16T11:46:53.610187479Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.125809ms grafana | logger=migrator t=2025-06-16T11:46:53.614929538Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2025-06-16T11:46:53.617130664Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.200306ms grafana | logger=migrator t=2025-06-16T11:46:53.620691944Z level=info msg="Executing migration" id="Add deleted for dashboard" grafana | logger=migrator t=2025-06-16T11:46:53.624275294Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=3.58283ms grafana | logger=migrator 
t=2025-06-16T11:46:53.627724681Z level=info msg="Executing migration" id="Add index for deleted" grafana | logger=migrator t=2025-06-16T11:46:53.628564935Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=840.114µs grafana | logger=migrator t=2025-06-16T11:46:53.63364956Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag" grafana | logger=migrator t=2025-06-16T11:46:53.63666203Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=3.01168ms grafana | logger=migrator t=2025-06-16T11:46:53.640130779Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag" grafana | logger=migrator t=2025-06-16T11:46:53.642506857Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=2.375269ms grafana | logger=migrator t=2025-06-16T11:46:53.645502207Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag" grafana | logger=migrator t=2025-06-16T11:46:53.646044417Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=541.55µs grafana | logger=migrator t=2025-06-16T11:46:53.65702681Z level=info msg="Executing migration" id="Add apiVersion for dashboard" grafana | logger=migrator t=2025-06-16T11:46:53.660882094Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=3.854354ms grafana | logger=migrator t=2025-06-16T11:46:53.666074041Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" grafana | logger=migrator t=2025-06-16T11:46:53.666962866Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=888.455µs grafana | logger=migrator t=2025-06-16T11:46:53.670353942Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" grafana | logger=migrator t=2025-06-16T11:46:53.670870442Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=515.669µs grafana | logger=migrator t=2025-06-16T11:46:53.674447671Z level=info msg="Executing migration" id="create data_source table" grafana | logger=migrator t=2025-06-16T11:46:53.676010996Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.563345ms grafana | logger=migrator t=2025-06-16T11:46:53.680818857Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2025-06-16T11:46:53.682295562Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.478275ms grafana | logger=migrator t=2025-06-16T11:46:53.686289348Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2025-06-16T11:46:53.687481057Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.191499ms grafana | logger=migrator t=2025-06-16T11:46:53.691083068Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2025-06-16T11:46:53.691881111Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=797.743µs grafana | logger=migrator t=2025-06-16T11:46:53.696252995Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | 
logger=migrator t=2025-06-16T11:46:53.697013457Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=756.912µs grafana | logger=migrator t=2025-06-16T11:46:53.700228801Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2025-06-16T11:46:53.70680787Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.578559ms grafana | logger=migrator t=2025-06-16T11:46:53.711345026Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2025-06-16T11:46:53.712154759Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=809.513µs grafana | logger=migrator t=2025-06-16T11:46:53.717769573Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2025-06-16T11:46:53.719101846Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.332793ms grafana | logger=migrator t=2025-06-16T11:46:53.724044058Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2025-06-16T11:46:53.724634188Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=591.07µs grafana | logger=migrator t=2025-06-16T11:46:53.727553457Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2025-06-16T11:46:53.727962053Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=408.366µs grafana | logger=migrator t=2025-06-16T11:46:53.735942827Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2025-06-16T11:46:53.738103123Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.161716ms grafana | logger=migrator t=2025-06-16T11:46:53.74335541Z level=info msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2025-06-16T11:46:53.746193647Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.837457ms grafana | logger=migrator t=2025-06-16T11:46:53.74932406Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator t=2025-06-16T11:46:53.749362151Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=38.691µs grafana | logger=migrator t=2025-06-16T11:46:53.78110967Z level=info msg="Executing migration" id="Update initial version to 1" grafana | logger=migrator t=2025-06-16T11:46:53.781440325Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=330.795µs grafana | logger=migrator t=2025-06-16T11:46:53.786069253Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2025-06-16T11:46:53.790221752Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=4.151359ms grafana | logger=migrator t=2025-06-16T11:46:53.793174891Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator t=2025-06-16T11:46:53.793412455Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=237.014µs grafana | logger=migrator t=2025-06-16T11:46:53.796313183Z level=info msg="Executing 
migration" id="Update json_data with nulls" grafana | logger=migrator t=2025-06-16T11:46:53.796577318Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=260.214µs grafana | logger=migrator t=2025-06-16T11:46:53.804742774Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2025-06-16T11:46:53.809024596Z level=info msg="Migration successfully executed" id="Add uid column" duration=4.283402ms grafana | logger=migrator t=2025-06-16T11:46:53.81230807Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2025-06-16T11:46:53.812554964Z level=info msg="Migration successfully executed" id="Update uid value" duration=246.724µs grafana | logger=migrator t=2025-06-16T11:46:53.81528821Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2025-06-16T11:46:53.816121274Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=832.624µs grafana | logger=migrator t=2025-06-16T11:46:53.820479247Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" grafana | logger=migrator t=2025-06-16T11:46:53.82130415Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=828.093µs grafana | logger=migrator t=2025-06-16T11:46:53.825659374Z level=info msg="Executing migration" id="Add is_prunable column" grafana | logger=migrator t=2025-06-16T11:46:53.828188636Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=2.532813ms grafana | logger=migrator t=2025-06-16T11:46:53.830957532Z level=info msg="Executing migration" id="Add api_version column" grafana | logger=migrator t=2025-06-16T11:46:53.833515294Z level=info msg="Migration successfully executed" id="Add api_version column" duration=2.557002ms grafana | logger=migrator t=2025-06-16T11:46:53.838748101Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" grafana | logger=migrator t=2025-06-16T11:46:53.838766061Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=18.44µs grafana | logger=migrator t=2025-06-16T11:46:53.840833976Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2025-06-16T11:46:53.84166442Z level=info msg="Migration successfully executed" id="create api_key table" duration=829.954µs grafana | logger=migrator t=2025-06-16T11:46:53.8446284Z level=info msg="Executing migration" id="add index api_key.account_id" grafana | logger=migrator t=2025-06-16T11:46:53.845568665Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=939.905µs grafana | logger=migrator t=2025-06-16T11:46:53.851195479Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2025-06-16T11:46:53.852794965Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.600936ms grafana | logger=migrator t=2025-06-16T11:46:53.860020396Z level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2025-06-16T11:46:53.860901961Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=883.435µs grafana | logger=migrator t=2025-06-16T11:46:53.865678141Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" grafana | 
logger=migrator t=2025-06-16T11:46:53.866509574Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=830.353µs grafana | logger=migrator t=2025-06-16T11:46:53.893745239Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" grafana | logger=migrator t=2025-06-16T11:46:53.89444842Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=703.661µs grafana | logger=migrator t=2025-06-16T11:46:53.89857444Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2025-06-16T11:46:53.899456054Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=881.464µs grafana | logger=migrator t=2025-06-16T11:46:53.903769156Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2025-06-16T11:46:53.908864141Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=5.094895ms grafana | logger=migrator t=2025-06-16T11:46:53.911472854Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator t=2025-06-16T11:46:53.912006744Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=531.2µs grafana | logger=migrator t=2025-06-16T11:46:53.913996416Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2025-06-16T11:46:53.914558756Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=562.25µs grafana | logger=migrator t=2025-06-16T11:46:53.920217101Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2025-06-16T11:46:53.920828271Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=612.89µs grafana | logger=migrator t=2025-06-16T11:46:53.923651738Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2025-06-16T11:46:53.924256518Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=604.53µs grafana | logger=migrator t=2025-06-16T11:46:53.927025564Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2025-06-16T11:46:53.927294928Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=268.944µs grafana | logger=migrator t=2025-06-16T11:46:53.932411294Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2025-06-16T11:46:53.933868398Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=1.451894ms grafana | logger=migrator t=2025-06-16T11:46:53.937208774Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2025-06-16T11:46:53.937287526Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=78.822µs grafana | logger=migrator t=2025-06-16T11:46:53.940448508Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2025-06-16T11:46:53.942431621Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=1.982993ms grafana | logger=migrator t=2025-06-16T11:46:53.946454708Z level=info msg="Executing migration" id="Add service 
account foreign key" grafana | logger=migrator t=2025-06-16T11:46:53.9483081Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.852982ms grafana | logger=migrator t=2025-06-16T11:46:53.952835515Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2025-06-16T11:46:53.953023608Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=187.994µs grafana | logger=migrator t=2025-06-16T11:46:53.955885575Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2025-06-16T11:46:53.957865929Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=1.980014ms grafana | logger=migrator t=2025-06-16T11:46:53.961100262Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2025-06-16T11:46:53.963016395Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=1.915823ms grafana | logger=migrator t=2025-06-16T11:46:53.96694407Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2025-06-16T11:46:53.967705563Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=761.223µs grafana | logger=migrator t=2025-06-16T11:46:53.971004928Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2025-06-16T11:46:53.971450266Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=445.217µs grafana | logger=migrator t=2025-06-16T11:46:53.976051102Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2025-06-16T11:46:53.976765724Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=714.462µs grafana | logger=migrator t=2025-06-16T11:46:53.980806431Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2025-06-16T11:46:53.981628474Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=821.853µs grafana | logger=migrator t=2025-06-16T11:46:53.98489433Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2025-06-16T11:46:53.985824355Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=925.525µs grafana | logger=migrator t=2025-06-16T11:46:53.989012728Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2025-06-16T11:46:53.989936753Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=921.585µs grafana | logger=migrator t=2025-06-16T11:46:54.0059426Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2025-06-16T11:46:54.006027322Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=67.731µs grafana | logger=migrator t=2025-06-16T11:46:54.009490909Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2025-06-16T11:46:54.009592641Z level=info 
msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=101.302µs grafana | logger=migrator t=2025-06-16T11:46:54.012855396Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2025-06-16T11:46:54.015787085Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.931019ms grafana | logger=migrator t=2025-06-16T11:46:54.020709827Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2025-06-16T11:46:54.023553384Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.842848ms grafana | logger=migrator t=2025-06-16T11:46:54.028428856Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2025-06-16T11:46:54.028530277Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=102.411µs grafana | logger=migrator t=2025-06-16T11:46:54.031844032Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator t=2025-06-16T11:46:54.032669896Z level=info msg="Migration successfully executed" id="create quota table v1" duration=822.784µs grafana | logger=migrator t=2025-06-16T11:46:54.036183715Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2025-06-16T11:46:54.037745841Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.562306ms grafana | logger=migrator t=2025-06-16T11:46:54.043196211Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2025-06-16T11:46:54.043321614Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=126.393µs grafana | logger=migrator t=2025-06-16T11:46:54.046035879Z level=info msg="Executing migration" id="create plugin_setting table" grafana | logger=migrator t=2025-06-16T11:46:54.046974375Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=935.726µs grafana | logger=migrator t=2025-06-16T11:46:54.050625786Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator t=2025-06-16T11:46:54.05151829Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=892.224µs grafana | logger=migrator t=2025-06-16T11:46:54.056946051Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" grafana | logger=migrator t=2025-06-16T11:46:54.060237287Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.290476ms grafana | logger=migrator t=2025-06-16T11:46:54.065940022Z level=info msg="Executing migration" id="Update plugin_setting table charset" grafana | logger=migrator t=2025-06-16T11:46:54.066060574Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=120.812µs grafana | logger=migrator t=2025-06-16T11:46:54.069213406Z level=info msg="Executing migration" id="update NULL org_id to 1" grafana | logger=migrator t=2025-06-16T11:46:54.069684953Z level=info msg="Migration successfully executed" id="update NULL org_id to 1" duration=467.987µs grafana | logger=migrator 
t=2025-06-16T11:46:54.074035085Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1" grafana | logger=migrator t=2025-06-16T11:46:54.085895474Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=11.852239ms grafana | logger=migrator t=2025-06-16T11:46:54.093057533Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2025-06-16T11:46:54.093924708Z level=info msg="Migration successfully executed" id="create session table" duration=866.925µs grafana | logger=migrator t=2025-06-16T11:46:54.097181162Z level=info msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2025-06-16T11:46:54.097325785Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=145.683µs grafana | logger=migrator t=2025-06-16T11:46:54.100558829Z level=info msg="Executing migration" id="Drop old table playlist_item table" grafana | logger=migrator t=2025-06-16T11:46:54.100682261Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=121.422µs grafana | logger=migrator t=2025-06-16T11:46:54.135329188Z level=info msg="Executing migration" id="create playlist table v2" grafana | logger=migrator t=2025-06-16T11:46:54.136133772Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=807.114µs grafana | logger=migrator t=2025-06-16T11:46:54.138836257Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2025-06-16T11:46:54.139416526Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=579.869µs grafana | logger=migrator t=2025-06-16T11:46:54.141941589Z level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2025-06-16T11:46:54.141963359Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=21.731µs grafana | logger=migrator t=2025-06-16T11:46:54.144651904Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2025-06-16T11:46:54.144673214Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=22.09µs grafana | logger=migrator t=2025-06-16T11:46:54.149397532Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2025-06-16T11:46:54.151801893Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.404131ms grafana | logger=migrator t=2025-06-16T11:46:54.154315055Z level=info msg="Executing migration" id="Add playlist column updated_at" grafana | logger=migrator t=2025-06-16T11:46:54.156639684Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.325999ms grafana | logger=migrator t=2025-06-16T11:46:54.15944938Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2025-06-16T11:46:54.159550132Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=100.752µs grafana | logger=migrator t=2025-06-16T11:46:54.164668088Z level=info msg="Executing migration" id="drop preferences table v3" grafana | logger=migrator t=2025-06-16T11:46:54.164764169Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=96.431µs grafana | logger=migrator t=2025-06-16T11:46:54.168036674Z 
level=info msg="Executing migration" id="create preferences table v3" grafana | logger=migrator t=2025-06-16T11:46:54.168793436Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=758.222µs grafana | logger=migrator t=2025-06-16T11:46:54.171511331Z level=info msg="Executing migration" id="Update preferences table charset" grafana | logger=migrator t=2025-06-16T11:46:54.171531542Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=21µs grafana | logger=migrator t=2025-06-16T11:46:54.175539569Z level=info msg="Executing migration" id="Add column team_id in preferences" grafana | logger=migrator t=2025-06-16T11:46:54.178088771Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.549052ms grafana | logger=migrator t=2025-06-16T11:46:54.180746136Z level=info msg="Executing migration" id="Update team_id column values in preferences" grafana | logger=migrator t=2025-06-16T11:46:54.180947719Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=201.113µs grafana | logger=migrator t=2025-06-16T11:46:54.184562599Z level=info msg="Executing migration" id="Add column week_start in preferences" grafana | logger=migrator t=2025-06-16T11:46:54.18702923Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.466311ms grafana | logger=migrator t=2025-06-16T11:46:54.189819417Z level=info msg="Executing migration" id="Add column preferences.json_data" grafana | logger=migrator t=2025-06-16T11:46:54.192095135Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.275208ms grafana | logger=migrator t=2025-06-16T11:46:54.196417017Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" grafana | logger=migrator t=2025-06-16T11:46:54.196431207Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=14.72µs grafana | logger=migrator t=2025-06-16T11:46:54.199063401Z level=info msg="Executing migration" id="Add preferences index org_id" grafana | logger=migrator t=2025-06-16T11:46:54.199713692Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=650.171µs grafana | logger=migrator t=2025-06-16T11:46:54.203382043Z level=info msg="Executing migration" id="Add preferences index user_id" grafana | logger=migrator t=2025-06-16T11:46:54.204087855Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=705.612µs grafana | logger=migrator t=2025-06-16T11:46:54.209267841Z level=info msg="Executing migration" id="create alert table v1" grafana | logger=migrator t=2025-06-16T11:46:54.210187016Z level=info msg="Migration successfully executed" id="create alert table v1" duration=920.055µs grafana | logger=migrator t=2025-06-16T11:46:54.214156762Z level=info msg="Executing migration" id="add index alert org_id & id " grafana | logger=migrator t=2025-06-16T11:46:54.214907345Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=752.223µs grafana | logger=migrator t=2025-06-16T11:46:54.218017697Z level=info msg="Executing migration" id="add index alert state" grafana | logger=migrator t=2025-06-16T11:46:54.218863761Z level=info msg="Migration successfully executed" id="add index alert state" duration=847.294µs grafana | logger=migrator 
t=2025-06-16T11:46:54.222082744Z level=info msg="Executing migration" id="add index alert dashboard_id" grafana | logger=migrator t=2025-06-16T11:46:54.223195824Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.11311ms grafana | logger=migrator t=2025-06-16T11:46:54.226144003Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" grafana | logger=migrator t=2025-06-16T11:46:54.226934956Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=790.733µs grafana | logger=migrator t=2025-06-16T11:46:54.241043761Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" grafana | logger=migrator t=2025-06-16T11:46:54.242105548Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.063247ms grafana | logger=migrator t=2025-06-16T11:46:54.245519076Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" grafana | logger=migrator t=2025-06-16T11:46:54.24639255Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=874.084µs grafana | logger=migrator t=2025-06-16T11:46:54.250407806Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" grafana | logger=migrator t=2025-06-16T11:46:54.261890139Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=11.479683ms grafana | logger=migrator t=2025-06-16T11:46:54.268384347Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" grafana | logger=migrator t=2025-06-16T11:46:54.269088009Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=704.832µs grafana | logger=migrator t=2025-06-16T11:46:54.275122369Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" grafana | logger=migrator t=2025-06-16T11:46:54.276138927Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.017308ms grafana | logger=migrator t=2025-06-16T11:46:54.281582847Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" grafana | logger=migrator t=2025-06-16T11:46:54.281905202Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=322.485µs grafana | logger=migrator t=2025-06-16T11:46:54.284919033Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" grafana | logger=migrator t=2025-06-16T11:46:54.285539233Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=619.49µs grafana | logger=migrator t=2025-06-16T11:46:54.290391274Z level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2025-06-16T11:46:54.291170177Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=778.553µs grafana | logger=migrator t=2025-06-16T11:46:54.295058731Z level=info msg="Executing migration" id="Add column is_default" grafana | logger=migrator t=2025-06-16T11:46:54.299503116Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.442265ms grafana | logger=migrator t=2025-06-16T11:46:54.303240668Z level=info msg="Executing 
migration" id="Add column frequency" grafana | logger=migrator t=2025-06-16T11:46:54.307658912Z level=info msg="Migration successfully executed" id="Add column frequency" duration=4.420384ms grafana | logger=migrator t=2025-06-16T11:46:54.314311053Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2025-06-16T11:46:54.317259441Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=2.948889ms grafana | logger=migrator t=2025-06-16T11:46:54.320271782Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2025-06-16T11:46:54.324240018Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.965887ms grafana | logger=migrator t=2025-06-16T11:46:54.328407917Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2025-06-16T11:46:54.329402855Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=995.058µs grafana | logger=migrator t=2025-06-16T11:46:54.360685036Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2025-06-16T11:46:54.360733247Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=51.311µs grafana | logger=migrator t=2025-06-16T11:46:54.364356987Z level=info msg="Executing migration" id="Update alert_notification table charset" grafana | logger=migrator t=2025-06-16T11:46:54.364397198Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=41.571µs grafana | logger=migrator t=2025-06-16T11:46:54.367991577Z level=info msg="Executing migration" id="create notification_journal table v1" grafana | logger=migrator t=2025-06-16T11:46:54.369196028Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.199901ms grafana | logger=migrator t=2025-06-16T11:46:54.374775021Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-16T11:46:54.376372807Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.601186ms grafana | logger=migrator t=2025-06-16T11:46:54.379959728Z level=info msg="Executing migration" id="drop alert_notification_journal" grafana | logger=migrator t=2025-06-16T11:46:54.381272109Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.315301ms grafana | logger=migrator t=2025-06-16T11:46:54.386559767Z level=info msg="Executing migration" id="create alert_notification_state table v1" grafana | logger=migrator t=2025-06-16T11:46:54.387649345Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.089208ms grafana | logger=migrator t=2025-06-16T11:46:54.390801918Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" grafana | logger=migrator t=2025-06-16T11:46:54.391737424Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=935.166µs grafana | logger=migrator t=2025-06-16T11:46:54.395879372Z level=info msg="Executing migration" id="Add for to alert table" grafana | logger=migrator t=2025-06-16T11:46:54.400010682Z level=info msg="Migration 
successfully executed" id="Add for to alert table" duration=4.13094ms grafana | logger=migrator t=2025-06-16T11:46:54.403624292Z level=info msg="Executing migration" id="Add column uid in alert_notification" grafana | logger=migrator t=2025-06-16T11:46:54.40765619Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.031568ms grafana | logger=migrator t=2025-06-16T11:46:54.412218065Z level=info msg="Executing migration" id="Update uid column values in alert_notification" grafana | logger=migrator t=2025-06-16T11:46:54.412399828Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=180.223µs grafana | logger=migrator t=2025-06-16T11:46:54.416781762Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" grafana | logger=migrator t=2025-06-16T11:46:54.417791708Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.009616ms grafana | logger=migrator t=2025-06-16T11:46:54.421127373Z level=info msg="Executing migration" id="Remove unique index org_id_name" grafana | logger=migrator t=2025-06-16T11:46:54.422346044Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.217511ms grafana | logger=migrator t=2025-06-16T11:46:54.42630457Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" grafana | logger=migrator t=2025-06-16T11:46:54.431584478Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=5.279998ms grafana | logger=migrator t=2025-06-16T11:46:54.436109374Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" grafana | logger=migrator t=2025-06-16T11:46:54.436125954Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=17.42µs grafana | logger=migrator t=2025-06-16T11:46:54.440591308Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" grafana | logger=migrator t=2025-06-16T11:46:54.441673747Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.079229ms grafana | logger=migrator t=2025-06-16T11:46:54.445097924Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" grafana | logger=migrator t=2025-06-16T11:46:54.445927047Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=828.743µs grafana | logger=migrator t=2025-06-16T11:46:54.450163658Z level=info msg="Executing migration" id="Drop old annotation table v4" grafana | logger=migrator t=2025-06-16T11:46:54.45024349Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=79.892µs grafana | logger=migrator t=2025-06-16T11:46:54.453641986Z level=info msg="Executing migration" id="create annotation table v5" grafana | logger=migrator t=2025-06-16T11:46:54.45451677Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=874.554µs grafana | logger=migrator t=2025-06-16T11:46:54.457874946Z level=info msg="Executing migration" id="add index annotation 0 v3" grafana | logger=migrator t=2025-06-16T11:46:54.459198838Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.318782ms grafana | logger=migrator t=2025-06-16T11:46:54.489149489Z 
level=info msg="Executing migration" id="add index annotation 1 v3" grafana | logger=migrator t=2025-06-16T11:46:54.490535351Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.385722ms grafana | logger=migrator t=2025-06-16T11:46:54.494384516Z level=info msg="Executing migration" id="add index annotation 2 v3" grafana | logger=migrator t=2025-06-16T11:46:54.49524573Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=860.904µs grafana | logger=migrator t=2025-06-16T11:46:54.499047813Z level=info msg="Executing migration" id="add index annotation 3 v3" grafana | logger=migrator t=2025-06-16T11:46:54.500003489Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=954.926µs grafana | logger=migrator t=2025-06-16T11:46:54.504750748Z level=info msg="Executing migration" id="add index annotation 4 v3" grafana | logger=migrator t=2025-06-16T11:46:54.505703694Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=952.436µs grafana | logger=migrator t=2025-06-16T11:46:54.509222583Z level=info msg="Executing migration" id="Update annotation table charset" grafana | logger=migrator t=2025-06-16T11:46:54.509246393Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=24µs grafana | logger=migrator t=2025-06-16T11:46:54.513985073Z level=info msg="Executing migration" id="Add column region_id to annotation table" grafana | logger=migrator t=2025-06-16T11:46:54.521182802Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=7.196399ms grafana | logger=migrator t=2025-06-16T11:46:54.52583082Z level=info msg="Executing migration" id="Drop category_id index" grafana | logger=migrator t=2025-06-16T11:46:54.526670224Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=839.094µs grafana | logger=migrator t=2025-06-16T11:46:54.529949768Z level=info msg="Executing migration" id="Add column tags to annotation table" grafana | logger=migrator t=2025-06-16T11:46:54.534339092Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.388554ms grafana | logger=migrator t=2025-06-16T11:46:54.539190853Z level=info msg="Executing migration" id="Create annotation_tag table v2" grafana | logger=migrator t=2025-06-16T11:46:54.540093158Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=901.615µs grafana | logger=migrator t=2025-06-16T11:46:54.544495171Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" grafana | logger=migrator t=2025-06-16T11:46:54.545510578Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.014798ms grafana | logger=migrator t=2025-06-16T11:46:54.548834973Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" grafana | logger=migrator t=2025-06-16T11:46:54.549717068Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=881.815µs grafana | logger=migrator t=2025-06-16T11:46:54.553006153Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" grafana | logger=migrator t=2025-06-16T11:46:54.563851123Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to 
annotation_tag_v2 - v2" duration=10.845021ms grafana | logger=migrator t=2025-06-16T11:46:54.569283734Z level=info msg="Executing migration" id="Create annotation_tag table v3" grafana | logger=migrator t=2025-06-16T11:46:54.569984416Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=715.282µs grafana | logger=migrator t=2025-06-16T11:46:54.573299001Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" grafana | logger=migrator t=2025-06-16T11:46:54.574341348Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.041747ms grafana | logger=migrator t=2025-06-16T11:46:54.579925602Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" grafana | logger=migrator t=2025-06-16T11:46:54.580311728Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=385.306µs grafana | logger=migrator t=2025-06-16T11:46:54.62840313Z level=info msg="Executing migration" id="drop table annotation_tag_v2" grafana | logger=migrator t=2025-06-16T11:46:54.629449168Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=1.044838ms grafana | logger=migrator t=2025-06-16T11:46:54.633684098Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" grafana | logger=migrator t=2025-06-16T11:46:54.634132275Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=446.987µs grafana | logger=migrator t=2025-06-16T11:46:54.637721416Z level=info msg="Executing migration" id="Add created time to annotation table" grafana | logger=migrator t=2025-06-16T11:46:54.641833474Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.111458ms grafana | logger=migrator t=2025-06-16T11:46:54.646780427Z level=info msg="Executing migration" id="Add updated time to annotation table" grafana | logger=migrator t=2025-06-16T11:46:54.652038444Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=5.257217ms grafana | logger=migrator t=2025-06-16T11:46:54.655524182Z level=info msg="Executing migration" id="Add index for created in annotation table" grafana | logger=migrator t=2025-06-16T11:46:54.656460728Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=936.106µs grafana | logger=migrator t=2025-06-16T11:46:54.659971167Z level=info msg="Executing migration" id="Add index for updated in annotation table" grafana | logger=migrator t=2025-06-16T11:46:54.661096075Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.124468ms grafana | logger=migrator t=2025-06-16T11:46:54.665377227Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" grafana | logger=migrator t=2025-06-16T11:46:54.665674982Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=297.095µs grafana | logger=migrator t=2025-06-16T11:46:54.668920406Z level=info msg="Executing migration" id="Add epoch_end column" grafana | logger=migrator t=2025-06-16T11:46:54.673210347Z level=info msg="Migration successfully executed" id="Add epoch_end 
column" duration=4.289071ms grafana | logger=migrator t=2025-06-16T11:46:54.67759843Z level=info msg="Executing migration" id="Add index for epoch_end" grafana | logger=migrator t=2025-06-16T11:46:54.678755289Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.155949ms grafana | logger=migrator t=2025-06-16T11:46:54.682154977Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" grafana | logger=migrator t=2025-06-16T11:46:54.682521963Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=366.386µs grafana | logger=migrator t=2025-06-16T11:46:54.687460584Z level=info msg="Executing migration" id="Move region to single row" grafana | logger=migrator t=2025-06-16T11:46:54.687844072Z level=info msg="Migration successfully executed" id="Move region to single row" duration=382.878µs grafana | logger=migrator t=2025-06-16T11:46:54.691200288Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" grafana | logger=migrator t=2025-06-16T11:46:54.69256359Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.362833ms grafana | logger=migrator t=2025-06-16T11:46:54.696238422Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" grafana | logger=migrator t=2025-06-16T11:46:54.697686875Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.451583ms grafana | logger=migrator t=2025-06-16T11:46:54.702407274Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-16T11:46:54.703596574Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.18891ms grafana | logger=migrator t=2025-06-16T11:46:54.707206613Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2025-06-16T11:46:54.708961503Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.75458ms grafana | logger=migrator t=2025-06-16T11:46:54.712843078Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" grafana | logger=migrator t=2025-06-16T11:46:54.714237781Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.412004ms grafana | logger=migrator t=2025-06-16T11:46:54.718868069Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" grafana | logger=migrator t=2025-06-16T11:46:54.719728453Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=860.004µs grafana | logger=migrator t=2025-06-16T11:46:54.764131204Z level=info msg="Executing migration" id="Increase tags column to length 4096" grafana | logger=migrator t=2025-06-16T11:46:54.764158244Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=28.69µs grafana | logger=migrator t=2025-06-16T11:46:54.769109457Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" grafana | logger=migrator t=2025-06-16T11:46:54.769136557Z level=info msg="Migration successfully executed" 
id="Increase prev_state column to length 40 not null" duration=28.32µs grafana | logger=migrator t=2025-06-16T11:46:54.773831125Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" grafana | logger=migrator t=2025-06-16T11:46:54.773848855Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=61.291µs grafana | logger=migrator t=2025-06-16T11:46:54.77713153Z level=info msg="Executing migration" id="create test_data table" grafana | logger=migrator t=2025-06-16T11:46:54.777965584Z level=info msg="Migration successfully executed" id="create test_data table" duration=833.665µs grafana | logger=migrator t=2025-06-16T11:46:54.781376501Z level=info msg="Executing migration" id="create dashboard_version table v1" grafana | logger=migrator t=2025-06-16T11:46:54.782626141Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.24861ms grafana | logger=migrator t=2025-06-16T11:46:54.787439902Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" grafana | logger=migrator t=2025-06-16T11:46:54.789050169Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.609747ms grafana | logger=migrator t=2025-06-16T11:46:54.792626799Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" grafana | logger=migrator t=2025-06-16T11:46:54.794601972Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.974094ms grafana | logger=migrator t=2025-06-16T11:46:54.799232878Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" grafana | logger=migrator t=2025-06-16T11:46:54.799417141Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=181.703µs grafana | logger=migrator t=2025-06-16T11:46:54.802169157Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" grafana | logger=migrator t=2025-06-16T11:46:54.802565305Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=394.978µs grafana | logger=migrator t=2025-06-16T11:46:54.806907487Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" grafana | logger=migrator t=2025-06-16T11:46:54.806926547Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=19.29µs grafana | logger=migrator t=2025-06-16T11:46:54.809599722Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" grafana | logger=migrator t=2025-06-16T11:46:54.814000965Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=4.400303ms grafana | logger=migrator t=2025-06-16T11:46:54.81854Z level=info msg="Executing migration" id="create team table" grafana | logger=migrator t=2025-06-16T11:46:54.819435136Z level=info msg="Migration successfully executed" id="create team table" duration=894.786µs grafana | logger=migrator t=2025-06-16T11:46:54.824313577Z level=info msg="Executing migration" id="add index team.org_id" grafana | logger=migrator t=2025-06-16T11:46:54.825952534Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.638468ms grafana | logger=migrator 
t=2025-06-16T11:46:54.829827289Z level=info msg="Executing migration" id="add unique index team_org_id_name" grafana | logger=migrator t=2025-06-16T11:46:54.830834726Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.006847ms grafana | logger=migrator t=2025-06-16T11:46:54.834493247Z level=info msg="Executing migration" id="Add column uid in team" grafana | logger=migrator t=2025-06-16T11:46:54.842017593Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=7.521496ms grafana | logger=migrator t=2025-06-16T11:46:54.847586035Z level=info msg="Executing migration" id="Update uid column values in team" grafana | logger=migrator t=2025-06-16T11:46:54.847823139Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=236.994µs grafana | logger=migrator t=2025-06-16T11:46:54.851347218Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" grafana | logger=migrator t=2025-06-16T11:46:54.852288603Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=940.095µs grafana | logger=migrator t=2025-06-16T11:46:54.855725541Z level=info msg="Executing migration" id="Add column external_uid in team" grafana | logger=migrator t=2025-06-16T11:46:54.860254006Z level=info msg="Migration successfully executed" id="Add column external_uid in team" duration=4.528055ms grafana | logger=migrator t=2025-06-16T11:46:54.898011936Z level=info msg="Executing migration" id="Add column is_provisioned in team" grafana | logger=migrator t=2025-06-16T11:46:54.907018496Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=9.00518ms grafana | logger=migrator t=2025-06-16T11:46:54.910504624Z level=info msg="Executing migration" id="create team member table" grafana | logger=migrator t=2025-06-16T11:46:54.911126024Z level=info msg="Migration successfully executed" id="create team member table" duration=620.93µs grafana | logger=migrator t=2025-06-16T11:46:54.914325688Z level=info msg="Executing migration" id="add index team_member.org_id" grafana | logger=migrator t=2025-06-16T11:46:54.915107861Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=781.654µs grafana | logger=migrator t=2025-06-16T11:46:54.919471163Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" grafana | logger=migrator t=2025-06-16T11:46:54.921091071Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.619168ms grafana | logger=migrator t=2025-06-16T11:46:54.925310051Z level=info msg="Executing migration" id="add index team_member.team_id" grafana | logger=migrator t=2025-06-16T11:46:54.9270598Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.75733ms grafana | logger=migrator t=2025-06-16T11:46:54.934525915Z level=info msg="Executing migration" id="Add column email to team table" grafana | logger=migrator t=2025-06-16T11:46:54.942113871Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=7.591226ms grafana | logger=migrator t=2025-06-16T11:46:54.946150219Z level=info msg="Executing migration" id="Add column external to team_member table" grafana | logger=migrator t=2025-06-16T11:46:54.949604806Z level=info msg="Migration successfully executed" id="Add column external to team_member table" 
duration=3.453547ms grafana | logger=migrator t=2025-06-16T11:46:54.953965049Z level=info msg="Executing migration" id="Add column permission to team_member table" grafana | logger=migrator t=2025-06-16T11:46:54.95883927Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.920782ms grafana | logger=migrator t=2025-06-16T11:46:54.961716978Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" grafana | logger=migrator t=2025-06-16T11:46:54.962604443Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=886.995µs grafana | logger=migrator t=2025-06-16T11:46:54.965596773Z level=info msg="Executing migration" id="create dashboard acl table" grafana | logger=migrator t=2025-06-16T11:46:54.966401756Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=804.533µs grafana | logger=migrator t=2025-06-16T11:46:54.970706958Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" grafana | logger=migrator t=2025-06-16T11:46:54.972225723Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.518425ms grafana | logger=migrator t=2025-06-16T11:46:54.975531598Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" grafana | logger=migrator t=2025-06-16T11:46:54.977287898Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.75537ms grafana | logger=migrator t=2025-06-16T11:46:54.980507111Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" grafana | logger=migrator t=2025-06-16T11:46:54.981428077Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=920.615µs grafana | logger=migrator t=2025-06-16T11:46:54.985929122Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" grafana | logger=migrator t=2025-06-16T11:46:54.986790696Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=861.244µs grafana | logger=migrator t=2025-06-16T11:46:54.989640263Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" grafana | logger=migrator t=2025-06-16T11:46:54.990542209Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=901.426µs grafana | logger=migrator t=2025-06-16T11:46:54.993484738Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" grafana | logger=migrator t=2025-06-16T11:46:54.994350722Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=865.524µs grafana | logger=migrator t=2025-06-16T11:46:55.005405017Z level=info msg="Executing migration" id="add index dashboard_permission" grafana | logger=migrator t=2025-06-16T11:46:55.006936742Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.527715ms grafana | logger=migrator t=2025-06-16T11:46:55.010270948Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" grafana | logger=migrator t=2025-06-16T11:46:55.010762476Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=491.488µs grafana | logger=migrator t=2025-06-16T11:46:55.015499625Z level=info msg="Executing migration" 
id="delete acl rules for deleted dashboards and folders" grafana | logger=migrator t=2025-06-16T11:46:55.015727619Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=226.144µs grafana | logger=migrator t=2025-06-16T11:46:55.017972626Z level=info msg="Executing migration" id="create tag table" grafana | logger=migrator t=2025-06-16T11:46:55.018945963Z level=info msg="Migration successfully executed" id="create tag table" duration=972.177µs grafana | logger=migrator t=2025-06-16T11:46:55.022226697Z level=info msg="Executing migration" id="add index tag.key_value" grafana | logger=migrator t=2025-06-16T11:46:55.023723451Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.492204ms grafana | logger=migrator t=2025-06-16T11:46:55.029755453Z level=info msg="Executing migration" id="create login attempt table" grafana | logger=migrator t=2025-06-16T11:46:55.030546226Z level=info msg="Migration successfully executed" id="create login attempt table" duration=790.163µs grafana | logger=migrator t=2025-06-16T11:46:55.033915442Z level=info msg="Executing migration" id="add index login_attempt.username" grafana | logger=migrator t=2025-06-16T11:46:55.035289355Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.372383ms grafana | logger=migrator t=2025-06-16T11:46:55.040130875Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" grafana | logger=migrator t=2025-06-16T11:46:55.041326646Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.197881ms grafana | logger=migrator t=2025-06-16T11:46:55.044643971Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-16T11:46:55.059931205Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=15.285504ms grafana | logger=migrator t=2025-06-16T11:46:55.064260438Z level=info msg="Executing migration" id="create login_attempt v2" grafana | logger=migrator t=2025-06-16T11:46:55.064856697Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=597.14µs grafana | logger=migrator t=2025-06-16T11:46:55.068874775Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" grafana | logger=migrator t=2025-06-16T11:46:55.069631877Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=756.942µs grafana | logger=migrator t=2025-06-16T11:46:55.073302218Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" grafana | logger=migrator t=2025-06-16T11:46:55.073850107Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=547.499µs grafana | logger=migrator t=2025-06-16T11:46:55.077381427Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" grafana | logger=migrator t=2025-06-16T11:46:55.078361083Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=978.996µs grafana | logger=migrator t=2025-06-16T11:46:55.082941959Z level=info msg="Executing migration" id="create user auth table" grafana | logger=migrator t=2025-06-16T11:46:55.083815244Z level=info msg="Migration successfully executed" id="create user auth table" duration=872.655µs grafana | logger=migrator 
t=2025-06-16T11:46:55.089445867Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" grafana | logger=migrator t=2025-06-16T11:46:55.091158376Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.711909ms grafana | logger=migrator t=2025-06-16T11:46:55.09499345Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" grafana | logger=migrator t=2025-06-16T11:46:55.09502628Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=33.95µs grafana | logger=migrator t=2025-06-16T11:46:55.1303347Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" grafana | logger=migrator t=2025-06-16T11:46:55.136708735Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=6.374906ms grafana | logger=migrator t=2025-06-16T11:46:55.140116352Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" grafana | logger=migrator t=2025-06-16T11:46:55.1453892Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.252628ms grafana | logger=migrator t=2025-06-16T11:46:55.148817877Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" grafana | logger=migrator t=2025-06-16T11:46:55.154511422Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.692605ms grafana | logger=migrator t=2025-06-16T11:46:55.159987373Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" grafana | logger=migrator t=2025-06-16T11:46:55.165381733Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.39463ms grafana | logger=migrator t=2025-06-16T11:46:55.168722909Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" grafana | logger=migrator t=2025-06-16T11:46:55.169743156Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.019738ms grafana | logger=migrator t=2025-06-16T11:46:55.173009551Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" grafana | logger=migrator t=2025-06-16T11:46:55.178277758Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.267098ms grafana | logger=migrator t=2025-06-16T11:46:55.182711571Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" grafana | logger=migrator t=2025-06-16T11:46:55.188208643Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=5.496202ms grafana | logger=migrator t=2025-06-16T11:46:55.192382743Z level=info msg="Executing migration" id="create server_lock table" grafana | logger=migrator t=2025-06-16T11:46:55.193178376Z level=info msg="Migration successfully executed" id="create server_lock table" duration=795.163µs grafana | logger=migrator t=2025-06-16T11:46:55.196582323Z level=info msg="Executing migration" id="add index server_lock.operation_uid" grafana | logger=migrator t=2025-06-16T11:46:55.19759001Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.009297ms grafana | logger=migrator t=2025-06-16T11:46:55.202152366Z level=info msg="Executing migration" id="create user auth token table" grafana | logger=migrator t=2025-06-16T11:46:55.203055091Z level=info msg="Migration successfully 
executed" id="create user auth token table" duration=902.004µs grafana | logger=migrator t=2025-06-16T11:46:55.206660441Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" grafana | logger=migrator t=2025-06-16T11:46:55.20780437Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.142249ms grafana | logger=migrator t=2025-06-16T11:46:55.213003627Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" grafana | logger=migrator t=2025-06-16T11:46:55.214568123Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.563656ms grafana | logger=migrator t=2025-06-16T11:46:55.219251531Z level=info msg="Executing migration" id="add index user_auth_token.user_id" grafana | logger=migrator t=2025-06-16T11:46:55.220198637Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=946.456µs grafana | logger=migrator t=2025-06-16T11:46:55.223601954Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" grafana | logger=migrator t=2025-06-16T11:46:55.232233437Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=8.600123ms grafana | logger=migrator t=2025-06-16T11:46:55.261913003Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2025-06-16T11:46:55.263830054Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.919722ms grafana | logger=migrator t=2025-06-16T11:46:55.267556527Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" grafana | logger=migrator t=2025-06-16T11:46:55.273217891Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=5.660814ms grafana | logger=migrator t=2025-06-16T11:46:55.277762706Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2025-06-16T11:46:55.278715502Z level=info msg="Migration successfully executed" id="create cache_data table" duration=952.306µs grafana | logger=migrator t=2025-06-16T11:46:55.282146819Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2025-06-16T11:46:55.283123095Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=976.016µs grafana | logger=migrator t=2025-06-16T11:46:55.28754732Z level=info msg="Executing migration" id="create short_url table v1" grafana | logger=migrator t=2025-06-16T11:46:55.288406974Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=858.874µs grafana | logger=migrator t=2025-06-16T11:46:55.294703688Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2025-06-16T11:46:55.296557029Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.852161ms grafana | logger=migrator t=2025-06-16T11:46:55.300408783Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator t=2025-06-16T11:46:55.300437184Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=29.511µs grafana | logger=migrator t=2025-06-16T11:46:55.306098088Z level=info 
msg="Executing migration" id="delete alert_definition table" grafana | logger=migrator t=2025-06-16T11:46:55.306258161Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=159.133µs grafana | logger=migrator t=2025-06-16T11:46:55.309274381Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2025-06-16T11:46:55.310338339Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.062178ms grafana | logger=migrator t=2025-06-16T11:46:55.316479452Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-16T11:46:55.318213271Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.736839ms grafana | logger=migrator t=2025-06-16T11:46:55.321740759Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-16T11:46:55.322854287Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.113348ms grafana | logger=migrator t=2025-06-16T11:46:55.326274354Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-16T11:46:55.326291174Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=17.69µs grafana | logger=migrator t=2025-06-16T11:46:55.331360659Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-16T11:46:55.332394837Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.033768ms grafana | logger=migrator t=2025-06-16T11:46:55.335726103Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-16T11:46:55.337170216Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.442123ms grafana | logger=migrator t=2025-06-16T11:46:55.342899542Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2025-06-16T11:46:55.344764963Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.863512ms grafana | logger=migrator t=2025-06-16T11:46:55.349529923Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2025-06-16T11:46:55.350543689Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.013087ms grafana | logger=migrator t=2025-06-16T11:46:55.353802223Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2025-06-16T11:46:55.359549069Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.725136ms grafana | logger=migrator t=2025-06-16T11:46:55.389794194Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2025-06-16T11:46:55.391823517Z level=info msg="Migration successfully executed" 
id="drop alert_definition table" duration=2.028463ms grafana | logger=migrator t=2025-06-16T11:46:55.397240968Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2025-06-16T11:46:55.397502182Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=260.464µs grafana | logger=migrator t=2025-06-16T11:46:55.403954449Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2025-06-16T11:46:55.405701579Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.74263ms grafana | logger=migrator t=2025-06-16T11:46:55.409674165Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2025-06-16T11:46:55.411438664Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.763819ms grafana | logger=migrator t=2025-06-16T11:46:55.415210667Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator t=2025-06-16T11:46:55.416240165Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.029008ms grafana | logger=migrator t=2025-06-16T11:46:55.421624464Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-16T11:46:55.421651895Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=28.871µs grafana | logger=migrator t=2025-06-16T11:46:55.426511945Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2025-06-16T11:46:55.428031331Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.518296ms grafana | logger=migrator t=2025-06-16T11:46:55.43159523Z level=info msg="Executing migration" id="create alert_instance table" grafana | logger=migrator t=2025-06-16T11:46:55.432628237Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.031657ms grafana | logger=migrator t=2025-06-16T11:46:55.436702555Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2025-06-16T11:46:55.437751642Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.048087ms grafana | logger=migrator t=2025-06-16T11:46:55.441218191Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2025-06-16T11:46:55.442241288Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.022497ms grafana | logger=migrator t=2025-06-16T11:46:55.447122299Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2025-06-16T11:46:55.457634804Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" 
duration=10.512035ms grafana | logger=migrator t=2025-06-16T11:46:55.462054978Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-16T11:46:55.46278286Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=727.552µs grafana | logger=migrator t=2025-06-16T11:46:55.466352119Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-16T11:46:55.467324576Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=972.087µs grafana | logger=migrator t=2025-06-16T11:46:55.471763249Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2025-06-16T11:46:55.498522746Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=26.759547ms grafana | logger=migrator t=2025-06-16T11:46:55.514452632Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2025-06-16T11:46:55.54438656Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=29.933868ms grafana | logger=migrator t=2025-06-16T11:46:55.547802038Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2025-06-16T11:46:55.54853198Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=729.622µs grafana | logger=migrator t=2025-06-16T11:46:55.552599307Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" grafana | logger=migrator t=2025-06-16T11:46:55.553277859Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=677.872µs grafana | logger=migrator t=2025-06-16T11:46:55.556777247Z level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | logger=migrator t=2025-06-16T11:46:55.565789347Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=9.00936ms grafana | logger=migrator t=2025-06-16T11:46:55.569264695Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2025-06-16T11:46:55.575029321Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.763806ms grafana | logger=migrator t=2025-06-16T11:46:55.579639308Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2025-06-16T11:46:55.580730616Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.091238ms grafana | logger=migrator t=2025-06-16T11:46:55.584455538Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2025-06-16T11:46:55.585543886Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.087708ms grafana | logger=migrator t=2025-06-16T11:46:55.589208677Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" grafana | 
logger=migrator t=2025-06-16T11:46:55.590258865Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.046118ms grafana | logger=migrator t=2025-06-16T11:46:55.595098155Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" grafana | logger=migrator t=2025-06-16T11:46:55.59718652Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=2.086555ms grafana | logger=migrator t=2025-06-16T11:46:55.601084926Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-16T11:46:55.601108116Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=23.54µs grafana | logger=migrator t=2025-06-16T11:46:55.604562023Z level=info msg="Executing migration" id="add column for to alert_rule" grafana | logger=migrator t=2025-06-16T11:46:55.611472178Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.910075ms grafana | logger=migrator t=2025-06-16T11:46:55.63733353Z level=info msg="Executing migration" id="add column annotations to alert_rule" grafana | logger=migrator t=2025-06-16T11:46:55.64818798Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=10.85435ms grafana | logger=migrator t=2025-06-16T11:46:55.653650741Z level=info msg="Executing migration" id="add column labels to alert_rule" grafana | logger=migrator t=2025-06-16T11:46:55.658267158Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.620627ms grafana | logger=migrator t=2025-06-16T11:46:55.662394627Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" grafana | logger=migrator t=2025-06-16T11:46:55.663066899Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=671.692µs grafana | logger=migrator t=2025-06-16T11:46:55.66674819Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" grafana | logger=migrator t=2025-06-16T11:46:55.668329626Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.573966ms grafana | logger=migrator t=2025-06-16T11:46:55.673668514Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" grafana | logger=migrator t=2025-06-16T11:46:55.681113429Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=7.445275ms grafana | logger=migrator t=2025-06-16T11:46:55.68419717Z level=info msg="Executing migration" id="add panel_id column to alert_rule" grafana | logger=migrator t=2025-06-16T11:46:55.690269191Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.071241ms grafana | logger=migrator t=2025-06-16T11:46:55.693424585Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" grafana | logger=migrator t=2025-06-16T11:46:55.694858508Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.433203ms grafana | logger=migrator t=2025-06-16T11:46:55.699933693Z 
level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" grafana | logger=migrator t=2025-06-16T11:46:55.705948633Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.0146ms grafana | logger=migrator t=2025-06-16T11:46:55.709225378Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" grafana | logger=migrator t=2025-06-16T11:46:55.715283618Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.05725ms grafana | logger=migrator t=2025-06-16T11:46:55.718548733Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" grafana | logger=migrator t=2025-06-16T11:46:55.718568754Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=26.681µs grafana | logger=migrator t=2025-06-16T11:46:55.723742919Z level=info msg="Executing migration" id="create alert_rule_version table" grafana | logger=migrator t=2025-06-16T11:46:55.725053482Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.308253ms grafana | logger=migrator t=2025-06-16T11:46:55.728497629Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-16T11:46:55.730098316Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.599697ms grafana | logger=migrator t=2025-06-16T11:46:55.763427691Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" grafana | logger=migrator t=2025-06-16T11:46:55.765704999Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=2.277908ms grafana | logger=migrator t=2025-06-16T11:46:55.771718069Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" grafana | logger=migrator t=2025-06-16T11:46:55.771734939Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=17.81µs grafana | logger=migrator t=2025-06-16T11:46:55.774014427Z level=info msg="Executing migration" id="add column for to alert_rule_version" grafana | logger=migrator t=2025-06-16T11:46:55.780592988Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.574191ms grafana | logger=migrator t=2025-06-16T11:46:55.783627198Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" grafana | logger=migrator t=2025-06-16T11:46:55.79097734Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=7.349082ms grafana | logger=migrator t=2025-06-16T11:46:55.794146523Z level=info msg="Executing migration" id="add column labels to alert_rule_version" grafana | logger=migrator t=2025-06-16T11:46:55.80053548Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.388187ms grafana | logger=migrator t=2025-06-16T11:46:55.805626545Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" grafana | logger=migrator t=2025-06-16T11:46:55.811838138Z level=info msg="Migration 
successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.210723ms grafana | logger=migrator t=2025-06-16T11:46:55.815327236Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" grafana | logger=migrator t=2025-06-16T11:46:55.819819861Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.486815ms grafana | logger=migrator t=2025-06-16T11:46:55.822890183Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" grafana | logger=migrator t=2025-06-16T11:46:55.822905963Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=16.52µs grafana | logger=migrator t=2025-06-16T11:46:55.828449925Z level=info msg="Executing migration" id=create_alert_configuration_table grafana | logger=migrator t=2025-06-16T11:46:55.829209878Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=759.503µs grafana | logger=migrator t=2025-06-16T11:46:55.834475905Z level=info msg="Executing migration" id="Add column default in alert_configuration" grafana | logger=migrator t=2025-06-16T11:46:55.843971083Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=9.497498ms grafana | logger=migrator t=2025-06-16T11:46:55.847174597Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" grafana | logger=migrator t=2025-06-16T11:46:55.847190587Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=16.88µs grafana | logger=migrator t=2025-06-16T11:46:55.852655828Z level=info msg="Executing migration" id="add column org_id in alert_configuration" grafana | logger=migrator t=2025-06-16T11:46:55.859079946Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.423098ms grafana | logger=migrator t=2025-06-16T11:46:55.880305729Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" grafana | logger=migrator t=2025-06-16T11:46:55.881977397Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.669368ms grafana | logger=migrator t=2025-06-16T11:46:55.886063585Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" grafana | logger=migrator t=2025-06-16T11:46:55.893853225Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=7.78993ms grafana | logger=migrator t=2025-06-16T11:46:55.897111999Z level=info msg="Executing migration" id=create_ngalert_configuration_table grafana | logger=migrator t=2025-06-16T11:46:55.897948114Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=835.955µs grafana | logger=migrator t=2025-06-16T11:46:55.904456942Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" grafana | logger=migrator t=2025-06-16T11:46:55.905523839Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.066717ms grafana | logger=migrator t=2025-06-16T11:46:55.908782523Z level=info msg="Executing migration" id="add column send_alerts_to 
in ngalert_configuration" grafana | logger=migrator t=2025-06-16T11:46:55.915166961Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.383948ms grafana | logger=migrator t=2025-06-16T11:46:55.920726714Z level=info msg="Executing migration" id="create provenance_type table" grafana | logger=migrator t=2025-06-16T11:46:55.921554637Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=827.004µs grafana | logger=migrator t=2025-06-16T11:46:55.924572857Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" grafana | logger=migrator t=2025-06-16T11:46:55.925652064Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.078237ms grafana | logger=migrator t=2025-06-16T11:46:55.930298912Z level=info msg="Executing migration" id="create alert_image table" grafana | logger=migrator t=2025-06-16T11:46:55.931112206Z level=info msg="Migration successfully executed" id="create alert_image table" duration=815.564µs grafana | logger=migrator t=2025-06-16T11:46:55.935202844Z level=info msg="Executing migration" id="add unique index on token to alert_image table" grafana | logger=migrator t=2025-06-16T11:46:55.93732873Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=2.123786ms grafana | logger=migrator t=2025-06-16T11:46:55.942625929Z level=info msg="Executing migration" id="support longer URLs in alert_image table" grafana | logger=migrator t=2025-06-16T11:46:55.942654709Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=30.021µs grafana | logger=migrator t=2025-06-16T11:46:55.945946024Z level=info msg="Executing migration" id=create_alert_configuration_history_table grafana | logger=migrator t=2025-06-16T11:46:55.946868619Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=922.435µs grafana | logger=migrator t=2025-06-16T11:46:55.9499622Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" grafana | logger=migrator t=2025-06-16T11:46:55.950888635Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=926.265µs grafana | logger=migrator t=2025-06-16T11:46:55.956762114Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-16T11:46:55.95714323Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" grafana | logger=migrator t=2025-06-16T11:46:55.961430511Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" grafana | logger=migrator t=2025-06-16T11:46:55.962104632Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=676.541µs grafana | logger=migrator t=2025-06-16T11:46:55.965628182Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" grafana | logger=migrator t=2025-06-16T11:46:55.967277339Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.648467ms grafana | logger=migrator t=2025-06-16T11:46:55.971680732Z level=info 
msg="Executing migration" id="add last_applied column to alert_configuration_history" grafana | logger=migrator t=2025-06-16T11:46:55.978454525Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.773253ms grafana | logger=migrator t=2025-06-16T11:46:56.006241768Z level=info msg="Executing migration" id="create library_element table v1" grafana | logger=migrator t=2025-06-16T11:46:56.007859616Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.604857ms grafana | logger=migrator t=2025-06-16T11:46:56.013165194Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" grafana | logger=migrator t=2025-06-16T11:46:56.014856802Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.691278ms grafana | logger=migrator t=2025-06-16T11:46:56.018606525Z level=info msg="Executing migration" id="create library_element_connection table v1" grafana | logger=migrator t=2025-06-16T11:46:56.019462749Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=855.804µs grafana | logger=migrator t=2025-06-16T11:46:56.022799194Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" grafana | logger=migrator t=2025-06-16T11:46:56.02378691Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=986.956µs grafana | logger=migrator t=2025-06-16T11:46:56.028054692Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" grafana | logger=migrator t=2025-06-16T11:46:56.029043358Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=988.226µs grafana | logger=migrator t=2025-06-16T11:46:56.033151107Z level=info msg="Executing migration" id="increase max description length to 2048" grafana | logger=migrator t=2025-06-16T11:46:56.033176217Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=26µs grafana | logger=migrator t=2025-06-16T11:46:56.03873039Z level=info msg="Executing migration" id="alter library_element model to mediumtext" grafana | logger=migrator t=2025-06-16T11:46:56.03875855Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=33.97µs grafana | logger=migrator t=2025-06-16T11:46:56.042206978Z level=info msg="Executing migration" id="add library_element folder uid" grafana | logger=migrator t=2025-06-16T11:46:56.053256572Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=11.049984ms grafana | logger=migrator t=2025-06-16T11:46:56.057264668Z level=info msg="Executing migration" id="populate library_element folder_uid" grafana | logger=migrator t=2025-06-16T11:46:56.057548854Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=283.826µs grafana | logger=migrator t=2025-06-16T11:46:56.061232335Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind" grafana | logger=migrator t=2025-06-16T11:46:56.062316733Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=1.083938ms grafana | logger=migrator t=2025-06-16T11:46:56.065598067Z level=info 
msg="Executing migration" id="clone move dashboard alerts to unified alerting" grafana | logger=migrator t=2025-06-16T11:46:56.065862901Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=267.624µs grafana | logger=migrator t=2025-06-16T11:46:56.069246768Z level=info msg="Executing migration" id="create data_keys table" grafana | logger=migrator t=2025-06-16T11:46:56.071390293Z level=info msg="Migration successfully executed" id="create data_keys table" duration=2.140665ms grafana | logger=migrator t=2025-06-16T11:46:56.079255055Z level=info msg="Executing migration" id="create secrets table" grafana | logger=migrator t=2025-06-16T11:46:56.080163701Z level=info msg="Migration successfully executed" id="create secrets table" duration=908.865µs grafana | logger=migrator t=2025-06-16T11:46:56.083635378Z level=info msg="Executing migration" id="rename data_keys name column to id" grafana | logger=migrator t=2025-06-16T11:46:56.120115005Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=36.478647ms grafana | logger=migrator t=2025-06-16T11:46:56.127534929Z level=info msg="Executing migration" id="add name column into data_keys" grafana | logger=migrator t=2025-06-16T11:46:56.137522775Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=9.990256ms grafana | logger=migrator t=2025-06-16T11:46:56.142168383Z level=info msg="Executing migration" id="copy data_keys id column values into name" grafana | logger=migrator t=2025-06-16T11:46:56.142427958Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=259.065µs grafana | logger=migrator t=2025-06-16T11:46:56.146119229Z level=info msg="Executing migration" id="rename data_keys name column to label" grafana | logger=migrator t=2025-06-16T11:46:56.18160487Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=35.485411ms grafana | logger=migrator t=2025-06-16T11:46:56.184775973Z level=info msg="Executing migration" id="rename data_keys id column back to name" grafana | logger=migrator t=2025-06-16T11:46:56.212024407Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=27.247614ms grafana | logger=migrator t=2025-06-16T11:46:56.217001089Z level=info msg="Executing migration" id="create kv_store table v1" grafana | logger=migrator t=2025-06-16T11:46:56.217733212Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=731.703µs grafana | logger=migrator t=2025-06-16T11:46:56.25003249Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" grafana | logger=migrator t=2025-06-16T11:46:56.252153605Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=2.121905ms grafana | logger=migrator t=2025-06-16T11:46:56.255527951Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" grafana | logger=migrator t=2025-06-16T11:46:56.255769955Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=241.854µs grafana | logger=migrator t=2025-06-16T11:46:56.260095468Z level=info msg="Executing migration" id="create permission table" grafana | logger=migrator t=2025-06-16T11:46:56.260962172Z level=info msg="Migration successfully executed" id="create 
permission table" duration=866.274µs grafana | logger=migrator t=2025-06-16T11:46:56.264033393Z level=info msg="Executing migration" id="add unique index permission.role_id" grafana | logger=migrator t=2025-06-16T11:46:56.265699311Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.665248ms grafana | logger=migrator t=2025-06-16T11:46:56.27166165Z level=info msg="Executing migration" id="add unique index role_id_action_scope" grafana | logger=migrator t=2025-06-16T11:46:56.273141785Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.479744ms grafana | logger=migrator t=2025-06-16T11:46:56.278220219Z level=info msg="Executing migration" id="create role table" grafana | logger=migrator t=2025-06-16T11:46:56.279500691Z level=info msg="Migration successfully executed" id="create role table" duration=1.279582ms grafana | logger=migrator t=2025-06-16T11:46:56.282770796Z level=info msg="Executing migration" id="add column display_name" grafana | logger=migrator t=2025-06-16T11:46:56.290535425Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.764449ms grafana | logger=migrator t=2025-06-16T11:46:56.294698174Z level=info msg="Executing migration" id="add column group_name" grafana | logger=migrator t=2025-06-16T11:46:56.304720711Z level=info msg="Migration successfully executed" id="add column group_name" duration=9.994347ms grafana | logger=migrator t=2025-06-16T11:46:56.31002892Z level=info msg="Executing migration" id="add index role.org_id" grafana | logger=migrator t=2025-06-16T11:46:56.311394072Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.360862ms grafana | logger=migrator t=2025-06-16T11:46:56.315041253Z level=info msg="Executing migration" id="add unique index role_org_id_name" grafana | logger=migrator t=2025-06-16T11:46:56.315914258Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=872.755µs grafana | logger=migrator t=2025-06-16T11:46:56.321110195Z level=info msg="Executing migration" id="add index role_org_id_uid" grafana | logger=migrator t=2025-06-16T11:46:56.321994019Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=883.284µs grafana | logger=migrator t=2025-06-16T11:46:56.326501895Z level=info msg="Executing migration" id="create team role table" grafana | logger=migrator t=2025-06-16T11:46:56.328126341Z level=info msg="Migration successfully executed" id="create team role table" duration=1.624297ms grafana | logger=migrator t=2025-06-16T11:46:56.334588549Z level=info msg="Executing migration" id="add index team_role.org_id" grafana | logger=migrator t=2025-06-16T11:46:56.335959302Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.371393ms grafana | logger=migrator t=2025-06-16T11:46:56.339198316Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" grafana | logger=migrator t=2025-06-16T11:46:56.340333634Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.134658ms grafana | logger=migrator t=2025-06-16T11:46:56.370734071Z level=info msg="Executing migration" id="add index team_role.team_id" grafana | logger=migrator t=2025-06-16T11:46:56.372563601Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.8287ms grafana | 
logger=migrator t=2025-06-16T11:46:56.376786561Z level=info msg="Executing migration" id="create user role table" grafana | logger=migrator t=2025-06-16T11:46:56.378107244Z level=info msg="Migration successfully executed" id="create user role table" duration=1.319963ms grafana | logger=migrator t=2025-06-16T11:46:56.381344068Z level=info msg="Executing migration" id="add index user_role.org_id" grafana | logger=migrator t=2025-06-16T11:46:56.382400645Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.055907ms grafana | logger=migrator t=2025-06-16T11:46:56.386915131Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" grafana | logger=migrator t=2025-06-16T11:46:56.388031939Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.116188ms grafana | logger=migrator t=2025-06-16T11:46:56.39106936Z level=info msg="Executing migration" id="add index user_role.user_id" grafana | logger=migrator t=2025-06-16T11:46:56.392149558Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.079948ms grafana | logger=migrator t=2025-06-16T11:46:56.396551662Z level=info msg="Executing migration" id="create builtin role table" grafana | logger=migrator t=2025-06-16T11:46:56.397385315Z level=info msg="Migration successfully executed" id="create builtin role table" duration=833.273µs grafana | logger=migrator t=2025-06-16T11:46:56.401398572Z level=info msg="Executing migration" id="add index builtin_role.role_id" grafana | logger=migrator t=2025-06-16T11:46:56.40243301Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.034258ms grafana | logger=migrator t=2025-06-16T11:46:56.406947074Z level=info msg="Executing migration" id="add index builtin_role.name" grafana | logger=migrator t=2025-06-16T11:46:56.408601462Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.654098ms grafana | logger=migrator t=2025-06-16T11:46:56.411833146Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" grafana | logger=migrator t=2025-06-16T11:46:56.421650119Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=9.816863ms grafana | logger=migrator t=2025-06-16T11:46:56.425088297Z level=info msg="Executing migration" id="add index builtin_role.org_id" grafana | logger=migrator t=2025-06-16T11:46:56.426126424Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.037807ms grafana | logger=migrator t=2025-06-16T11:46:56.430717811Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" grafana | logger=migrator t=2025-06-16T11:46:56.431787598Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.069337ms grafana | logger=migrator t=2025-06-16T11:46:56.434809478Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" grafana | logger=migrator t=2025-06-16T11:46:56.435853317Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.043288ms grafana | logger=migrator t=2025-06-16T11:46:56.438625193Z level=info msg="Executing migration" id="add unique index role.uid" grafana | logger=migrator t=2025-06-16T11:46:56.439635019Z level=info msg="Migration successfully executed" id="add unique index 
role.uid" duration=1.009186ms grafana | logger=migrator t=2025-06-16T11:46:56.444213806Z level=info msg="Executing migration" id="create seed assignment table" grafana | logger=migrator t=2025-06-16T11:46:56.445006579Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=791.903µs grafana | logger=migrator t=2025-06-16T11:46:56.449929651Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" grafana | logger=migrator t=2025-06-16T11:46:56.451181371Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.24859ms grafana | logger=migrator t=2025-06-16T11:46:56.45411303Z level=info msg="Executing migration" id="add column hidden to role table" grafana | logger=migrator t=2025-06-16T11:46:56.462347487Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.233467ms grafana | logger=migrator t=2025-06-16T11:46:56.541709089Z level=info msg="Executing migration" id="permission kind migration" grafana | logger=migrator t=2025-06-16T11:46:56.551489443Z level=info msg="Migration successfully executed" id="permission kind migration" duration=9.780063ms grafana | logger=migrator t=2025-06-16T11:46:56.557661235Z level=info msg="Executing migration" id="permission attribute migration" grafana | logger=migrator t=2025-06-16T11:46:56.563551784Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.889888ms grafana | logger=migrator t=2025-06-16T11:46:56.577958983Z level=info msg="Executing migration" id="permission identifier migration" grafana | logger=migrator t=2025-06-16T11:46:56.588595811Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=10.637238ms grafana | logger=migrator t=2025-06-16T11:46:56.593725036Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2025-06-16T11:46:56.594773473Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.048057ms grafana | logger=migrator t=2025-06-16T11:46:56.599165767Z level=info msg="Executing migration" id="add permission action scope role_id index" grafana | logger=migrator t=2025-06-16T11:46:56.600224234Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.054947ms grafana | logger=migrator t=2025-06-16T11:46:56.607323773Z level=info msg="Executing migration" id="remove permission role_id action scope index" grafana | logger=migrator t=2025-06-16T11:46:56.608494812Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.170899ms grafana | logger=migrator t=2025-06-16T11:46:56.611713466Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" grafana | logger=migrator t=2025-06-16T11:46:56.622460375Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=10.746439ms grafana | logger=migrator t=2025-06-16T11:46:56.62994906Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" grafana | logger=migrator t=2025-06-16T11:46:56.631669428Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=1.719898ms grafana | logger=migrator t=2025-06-16T11:46:56.656609484Z level=info msg="Executing migration" id="remove 
user_role org ID, user ID, role ID index" grafana | logger=migrator t=2025-06-16T11:46:56.658513806Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=1.902892ms grafana | logger=migrator t=2025-06-16T11:46:56.664442874Z level=info msg="Executing migration" id="create query_history table v1" grafana | logger=migrator t=2025-06-16T11:46:56.665342399Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=899.145µs grafana | logger=migrator t=2025-06-16T11:46:56.671507883Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" grafana | logger=migrator t=2025-06-16T11:46:56.672759093Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.249621ms grafana | logger=migrator t=2025-06-16T11:46:56.681017001Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" grafana | logger=migrator t=2025-06-16T11:46:56.681037461Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=21.12µs grafana | logger=migrator t=2025-06-16T11:46:56.687919996Z level=info msg="Executing migration" id="create query_history_details table v1" grafana | logger=migrator t=2025-06-16T11:46:56.689232087Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=1.310311ms grafana | logger=migrator t=2025-06-16T11:46:56.694752659Z level=info msg="Executing migration" id="rbac disabled migrator" grafana | logger=migrator t=2025-06-16T11:46:56.69479306Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=41.351µs grafana | logger=migrator t=2025-06-16T11:46:56.699069132Z level=info msg="Executing migration" id="teams permissions migration" grafana | logger=migrator t=2025-06-16T11:46:56.699633921Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=561.169µs grafana | logger=migrator t=2025-06-16T11:46:56.704618275Z level=info msg="Executing migration" id="dashboard permissions" grafana | logger=migrator t=2025-06-16T11:46:56.705635101Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.015297ms grafana | logger=migrator t=2025-06-16T11:46:56.709523835Z level=info msg="Executing migration" id="dashboard permissions uid scopes" grafana | logger=migrator t=2025-06-16T11:46:56.710786387Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.262332ms grafana | logger=migrator t=2025-06-16T11:46:56.714289565Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2025-06-16T11:46:56.714497938Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=208.503µs grafana | logger=migrator t=2025-06-16T11:46:56.718879242Z level=info msg="Executing migration" id="alerting notification permissions" grafana | logger=migrator t=2025-06-16T11:46:56.71935517Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=475.588µs grafana | logger=migrator t=2025-06-16T11:46:56.721986924Z level=info msg="Executing migration" id="create query_history_star table v1" grafana | logger=migrator t=2025-06-16T11:46:56.723222164Z level=info msg="Migration successfully executed" id="create 
query_history_star table v1" duration=1.23506ms grafana | logger=migrator t=2025-06-16T11:46:56.726842554Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2025-06-16T11:46:56.728519102Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.676708ms grafana | logger=migrator t=2025-06-16T11:46:56.732988507Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2025-06-16T11:46:56.74219216Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=9.203263ms grafana | logger=migrator t=2025-06-16T11:46:56.746844528Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2025-06-16T11:46:56.746865748Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=22.39µs grafana | logger=migrator t=2025-06-16T11:46:56.77883607Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2025-06-16T11:46:56.780895815Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=2.060595ms grafana | logger=migrator t=2025-06-16T11:46:56.786196003Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2025-06-16T11:46:56.788151605Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.955342ms grafana | logger=migrator t=2025-06-16T11:46:56.791885348Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2025-06-16T11:46:56.79385074Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.964802ms grafana | logger=migrator t=2025-06-16T11:46:56.797939249Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2025-06-16T11:46:56.806749855Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.809896ms grafana | logger=migrator t=2025-06-16T11:46:56.813304324Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2025-06-16T11:46:56.815782556Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=2.474092ms grafana | logger=migrator t=2025-06-16T11:46:56.821685414Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2025-06-16T11:46:56.823749049Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.063405ms grafana | logger=migrator t=2025-06-16T11:46:56.827584062Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-16T11:46:56.854538672Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=26.925049ms grafana | logger=migrator t=2025-06-16T11:46:56.861317104Z level=info msg="Executing migration" id="create correlation v2" grafana | logger=migrator t=2025-06-16T11:46:56.863387209Z level=info msg="Migration successfully executed" id="create correlation v2" duration=2.070845ms grafana | logger=migrator t=2025-06-16T11:46:56.868939061Z level=info msg="Executing 
migration" id="create index IDX_correlation_uid - v2" grafana | logger=migrator t=2025-06-16T11:46:56.870114012Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.174091ms grafana | logger=migrator t=2025-06-16T11:46:56.875484101Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2025-06-16T11:46:56.877313321Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.82884ms grafana | logger=migrator t=2025-06-16T11:46:56.914640592Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2025-06-16T11:46:56.916042936Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.404584ms grafana | logger=migrator t=2025-06-16T11:46:56.919054506Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2025-06-16T11:46:56.91929052Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=233.494µs grafana | logger=migrator t=2025-06-16T11:46:56.920976739Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" grafana | logger=migrator t=2025-06-16T11:46:56.921645849Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=668.62µs grafana | logger=migrator t=2025-06-16T11:46:56.925382851Z level=info msg="Executing migration" id="add provisioning column" grafana | logger=migrator t=2025-06-16T11:46:56.932712004Z level=info msg="Migration successfully executed" id="add provisioning column" duration=7.328513ms grafana | logger=migrator t=2025-06-16T11:46:56.936355405Z level=info msg="Executing migration" id="add type column" grafana | logger=migrator t=2025-06-16T11:46:56.942387525Z level=info msg="Migration successfully executed" id="add type column" duration=6.03161ms grafana | logger=migrator t=2025-06-16T11:46:56.94512166Z level=info msg="Executing migration" id="create entity_events table" grafana | logger=migrator t=2025-06-16T11:46:56.945794712Z level=info msg="Migration successfully executed" id="create entity_events table" duration=672.652µs grafana | logger=migrator t=2025-06-16T11:46:56.950949398Z level=info msg="Executing migration" id="create dashboard public config v1" grafana | logger=migrator t=2025-06-16T11:46:56.952910431Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.955992ms grafana | logger=migrator t=2025-06-16T11:46:56.958585435Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-16T11:46:56.959371768Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-16T11:46:56.963584448Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-16T11:46:56.96429211Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-16T11:46:56.967867379Z level=info msg="Executing migration" id="Drop old dashboard public config table" grafana | logger=migrator t=2025-06-16T11:46:56.968870637Z level=info msg="Migration successfully executed" 
id="Drop old dashboard public config table" duration=1.004128ms grafana | logger=migrator t=2025-06-16T11:46:56.973549945Z level=info msg="Executing migration" id="recreate dashboard public config v1" grafana | logger=migrator t=2025-06-16T11:46:56.97510881Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.558085ms grafana | logger=migrator t=2025-06-16T11:46:56.979012405Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2025-06-16T11:46:56.981293583Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=2.281438ms grafana | logger=migrator t=2025-06-16T11:46:56.989252976Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2025-06-16T11:46:56.990807972Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.556186ms grafana | logger=migrator t=2025-06-16T11:46:56.996153861Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-16T11:46:56.997561474Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.415143ms grafana | logger=migrator t=2025-06-16T11:46:57.001121574Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-16T11:46:57.002421206Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.299873ms grafana | logger=migrator t=2025-06-16T11:46:57.033017155Z level=info msg="Executing migration" id="Drop public config table" grafana | logger=migrator t=2025-06-16T11:46:57.034272866Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.258691ms grafana | logger=migrator t=2025-06-16T11:46:57.03992627Z level=info msg="Executing migration" id="Recreate dashboard public config v2" grafana | logger=migrator t=2025-06-16T11:46:57.041326274Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.396994ms grafana | logger=migrator t=2025-06-16T11:46:57.045043765Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2025-06-16T11:46:57.046344167Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.300911ms grafana | logger=migrator t=2025-06-16T11:46:57.050936163Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2025-06-16T11:46:57.052328757Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.393494ms grafana | logger=migrator t=2025-06-16T11:46:57.056307623Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" grafana | logger=migrator t=2025-06-16T11:46:57.057624195Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.317432ms grafana | logger=migrator t=2025-06-16T11:46:57.064337587Z level=info msg="Executing migration" id="Rename table dashboard_public_config to 
dashboard_public - v2" grafana | logger=migrator t=2025-06-16T11:46:57.086098179Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=21.757132ms grafana | logger=migrator t=2025-06-16T11:46:57.09341236Z level=info msg="Executing migration" id="add annotations_enabled column" grafana | logger=migrator t=2025-06-16T11:46:57.103904275Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=10.488265ms grafana | logger=migrator t=2025-06-16T11:46:57.107816Z level=info msg="Executing migration" id="add time_selection_enabled column" grafana | logger=migrator t=2025-06-16T11:46:57.117843458Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=9.989777ms grafana | logger=migrator t=2025-06-16T11:46:57.1233736Z level=info msg="Executing migration" id="delete orphaned public dashboards" grafana | logger=migrator t=2025-06-16T11:46:57.123922589Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=550.419µs grafana | logger=migrator t=2025-06-16T11:46:57.127292945Z level=info msg="Executing migration" id="add share column" grafana | logger=migrator t=2025-06-16T11:46:57.13841556Z level=info msg="Migration successfully executed" id="add share column" duration=11.116715ms grafana | logger=migrator t=2025-06-16T11:46:57.168247007Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" grafana | logger=migrator t=2025-06-16T11:46:57.169662921Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=1.416314ms grafana | logger=migrator t=2025-06-16T11:46:57.174868237Z level=info msg="Executing migration" id="create file table" grafana | logger=migrator t=2025-06-16T11:46:57.176516965Z level=info msg="Migration successfully executed" id="create file table" duration=1.647948ms grafana | logger=migrator t=2025-06-16T11:46:57.180095714Z level=info msg="Executing migration" id="file table idx: path natural pk" grafana | logger=migrator t=2025-06-16T11:46:57.181381176Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.282702ms grafana | logger=migrator t=2025-06-16T11:46:57.184530669Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2025-06-16T11:46:57.185792959Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.26197ms grafana | logger=migrator t=2025-06-16T11:46:57.190715461Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2025-06-16T11:46:57.191684007Z level=info msg="Migration successfully executed" id="create file_meta table" duration=967.946µs grafana | logger=migrator t=2025-06-16T11:46:57.194854751Z level=info msg="Executing migration" id="file table idx: path key" grafana | logger=migrator t=2025-06-16T11:46:57.196279104Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.424073ms grafana | logger=migrator t=2025-06-16T11:46:57.199421876Z level=info msg="Executing migration" id="set path collation in file table" grafana | logger=migrator t=2025-06-16T11:46:57.199441296Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=20.27µs grafana | logger=migrator 
t=2025-06-16T11:46:57.203691998Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" grafana | logger=migrator t=2025-06-16T11:46:57.203710678Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=19.61µs grafana | logger=migrator t=2025-06-16T11:46:57.20744217Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator t=2025-06-16T11:46:57.208355155Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=912.135µs grafana | logger=migrator t=2025-06-16T11:46:57.211859304Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2025-06-16T11:46:57.212279881Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=419.627µs grafana | logger=migrator t=2025-06-16T11:46:57.21584205Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2025-06-16T11:46:57.217256504Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.413934ms grafana | logger=migrator t=2025-06-16T11:46:57.222284077Z level=info msg="Executing migration" id="Add UID column to playlist" grafana | logger=migrator t=2025-06-16T11:46:57.23206535Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.780583ms grafana | logger=migrator t=2025-06-16T11:46:57.235415606Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2025-06-16T11:46:57.235596319Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=180.223µs grafana | logger=migrator t=2025-06-16T11:46:57.238907254Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2025-06-16T11:46:57.2398184Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=910.796µs grafana | logger=migrator t=2025-06-16T11:46:57.244315114Z level=info msg="Executing migration" id="update group index for alert rules" grafana | logger=migrator t=2025-06-16T11:46:57.245004725Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=694.781µs grafana | logger=migrator t=2025-06-16T11:46:57.249783955Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" grafana | logger=migrator t=2025-06-16T11:46:57.250266993Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=482.208µs grafana | logger=migrator t=2025-06-16T11:46:57.254440113Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator t=2025-06-16T11:46:57.255027742Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=586.779µs grafana | logger=migrator t=2025-06-16T11:46:57.258373698Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2025-06-16T11:46:57.268360584Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.984936ms grafana | logger=migrator t=2025-06-16T11:46:57.275152507Z level=info msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator 
t=2025-06-16T11:46:57.287468833Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=12.316896ms grafana | logger=migrator t=2025-06-16T11:46:57.291218085Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" grafana | logger=migrator t=2025-06-16T11:46:57.292587368Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.369533ms grafana | logger=migrator t=2025-06-16T11:46:57.297091453Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2025-06-16T11:46:57.372969316Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=75.876653ms grafana | logger=migrator t=2025-06-16T11:46:57.380924089Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | logger=migrator t=2025-06-16T11:46:57.382143089Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.21919ms grafana | logger=migrator t=2025-06-16T11:46:57.385671258Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2025-06-16T11:46:57.3876334Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.961472ms grafana | logger=migrator t=2025-06-16T11:46:57.392803806Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2025-06-16T11:46:57.417099462Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=24.295256ms grafana | logger=migrator t=2025-06-16T11:46:57.421661067Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2025-06-16T11:46:57.428064054Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.402517ms grafana | logger=migrator t=2025-06-16T11:46:57.432833823Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2025-06-16T11:46:57.433058838Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=227.685µs grafana | logger=migrator t=2025-06-16T11:46:57.43681382Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2025-06-16T11:46:57.437068714Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=255.084µs grafana | logger=migrator t=2025-06-16T11:46:57.440900837Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | logger=migrator t=2025-06-16T11:46:57.441327174Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=426.027µs grafana | logger=migrator t=2025-06-16T11:46:57.445262731Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" grafana | logger=migrator t=2025-06-16T11:46:57.445673437Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=410.446µs grafana | logger=migrator t=2025-06-16T11:46:57.450184553Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 
grafana | logger=migrator t=2025-06-16T11:46:57.450529028Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=344.185µs grafana | logger=migrator t=2025-06-16T11:46:57.453896254Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2025-06-16T11:46:57.455309107Z level=info msg="Migration successfully executed" id="create folder table" duration=1.408143ms grafana | logger=migrator t=2025-06-16T11:46:57.458764526Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2025-06-16T11:46:57.459855794Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.090858ms grafana | logger=migrator t=2025-06-16T11:46:57.464376329Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2025-06-16T11:46:57.465444086Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.067277ms grafana | logger=migrator t=2025-06-16T11:46:57.468810892Z level=info msg="Executing migration" id="Update folder title length" grafana | logger=migrator t=2025-06-16T11:46:57.468859003Z level=info msg="Migration successfully executed" id="Update folder title length" duration=48.381µs grafana | logger=migrator t=2025-06-16T11:46:57.484087917Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-16T11:46:57.486285944Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=2.196227ms grafana | logger=migrator t=2025-06-16T11:46:57.491936888Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2025-06-16T11:46:57.493627496Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.689878ms grafana | logger=migrator t=2025-06-16T11:46:57.498830563Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2025-06-16T11:46:57.499955641Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.124318ms grafana | logger=migrator t=2025-06-16T11:46:57.50293514Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2025-06-16T11:46:57.503388799Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=452.479µs grafana | logger=migrator t=2025-06-16T11:46:57.509590062Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2025-06-16T11:46:57.510144441Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=554.549µs grafana | logger=migrator t=2025-06-16T11:46:57.51547072Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" grafana | logger=migrator t=2025-06-16T11:46:57.517174498Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.703318ms grafana | logger=migrator t=2025-06-16T11:46:57.521125394Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" grafana | logger=migrator t=2025-06-16T11:46:57.522303743Z level=info msg="Migration 
successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.176289ms grafana | logger=migrator t=2025-06-16T11:46:57.525430646Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" grafana | logger=migrator t=2025-06-16T11:46:57.526467473Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.036367ms grafana | logger=migrator t=2025-06-16T11:46:57.531459976Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-16T11:46:57.532561984Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.101448ms grafana | logger=migrator t=2025-06-16T11:46:57.538105117Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" grafana | logger=migrator t=2025-06-16T11:46:57.539435929Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.330052ms grafana | logger=migrator t=2025-06-16T11:46:57.543636739Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title" grafana | logger=migrator t=2025-06-16T11:46:57.544676556Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=1.039527ms grafana | logger=migrator t=2025-06-16T11:46:57.548769874Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2025-06-16T11:46:57.549683809Z level=info msg="Migration successfully executed" id="create anon_device table" duration=913.175µs grafana | logger=migrator t=2025-06-16T11:46:57.553010225Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2025-06-16T11:46:57.554104203Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.093458ms grafana | logger=migrator t=2025-06-16T11:46:57.559399401Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2025-06-16T11:46:57.560780964Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.380993ms grafana | logger=migrator t=2025-06-16T11:46:57.564254472Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2025-06-16T11:46:57.566842835Z level=info msg="Migration successfully executed" id="create signing_key table" duration=2.589313ms grafana | logger=migrator t=2025-06-16T11:46:57.571676046Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2025-06-16T11:46:57.572718533Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.041807ms grafana | logger=migrator t=2025-06-16T11:46:57.57796338Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2025-06-16T11:46:57.579157671Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.194071ms grafana | logger=migrator t=2025-06-16T11:46:57.583273909Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator t=2025-06-16T11:46:57.583549303Z level=info msg="Migration successfully executed" id="migrate record of created folders 
during legacy migration to kvstore" duration=275.814µs grafana | logger=migrator t=2025-06-16T11:46:57.596426608Z level=info msg="Executing migration" id="Add folder_uid for dashboard" grafana | logger=migrator t=2025-06-16T11:46:57.607664925Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=11.235897ms grafana | logger=migrator t=2025-06-16T11:46:57.612417295Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2025-06-16T11:46:57.613203697Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=787.313µs grafana | logger=migrator t=2025-06-16T11:46:57.616885828Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-16T11:46:57.616937349Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=55.501µs grafana | logger=migrator t=2025-06-16T11:46:57.622512052Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-16T11:46:57.624536266Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=2.024484ms grafana | logger=migrator t=2025-06-16T11:46:57.627942073Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2025-06-16T11:46:57.627959713Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=18.3µs grafana | logger=migrator t=2025-06-16T11:46:57.633307592Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-16T11:46:57.634647775Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.338383ms grafana | logger=migrator t=2025-06-16T11:46:57.63856927Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2025-06-16T11:46:57.641135152Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=2.563972ms grafana | logger=migrator t=2025-06-16T11:46:57.645701688Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2025-06-16T11:46:57.647541689Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.839981ms grafana | logger=migrator t=2025-06-16T11:46:57.656572749Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2025-06-16T11:46:57.657645218Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.075189ms grafana | logger=migrator t=2025-06-16T11:46:57.661934849Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2025-06-16T11:46:57.663095519Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.16181ms grafana | logger=migrator t=2025-06-16T11:46:57.667641514Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" grafana | logger=migrator 
t=2025-06-16T11:46:57.668208943Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=571.9µs grafana | logger=migrator t=2025-06-16T11:46:57.671691051Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration" grafana | logger=migrator t=2025-06-16T11:46:57.672331802Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=640.391µs grafana | logger=migrator t=2025-06-16T11:46:57.675546645Z level=info msg="Executing migration" id="create cloud_migration table v1" grafana | logger=migrator t=2025-06-16T11:46:57.676405469Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=858.634µs grafana | logger=migrator t=2025-06-16T11:46:57.679578013Z level=info msg="Executing migration" id="create cloud_migration_run table v1" grafana | logger=migrator t=2025-06-16T11:46:57.680473948Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=895.555µs grafana | logger=migrator t=2025-06-16T11:46:57.686324325Z level=info msg="Executing migration" id="add stack_id column" grafana | logger=migrator t=2025-06-16T11:46:57.698080171Z level=info msg="Migration successfully executed" id="add stack_id column" duration=11.755805ms grafana | logger=migrator t=2025-06-16T11:46:57.709182825Z level=info msg="Executing migration" id="add region_slug column" grafana | logger=migrator t=2025-06-16T11:46:57.721641584Z level=info msg="Migration successfully executed" id="add region_slug column" duration=12.459039ms grafana | logger=migrator t=2025-06-16T11:46:57.727108255Z level=info msg="Executing migration" id="add cluster_slug column" grafana | logger=migrator t=2025-06-16T11:46:57.737735111Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=10.625766ms grafana | logger=migrator t=2025-06-16T11:46:57.743966725Z level=info msg="Executing migration" id="add migration uid column" grafana | logger=migrator t=2025-06-16T11:46:57.753016796Z level=info msg="Migration successfully executed" id="add migration uid column" duration=9.047991ms grafana | logger=migrator t=2025-06-16T11:46:57.75747819Z level=info msg="Executing migration" id="Update uid column values for migration" grafana | logger=migrator t=2025-06-16T11:46:57.757659383Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=197.314µs grafana | logger=migrator t=2025-06-16T11:46:57.76107923Z level=info msg="Executing migration" id="Add unique index migration_uid" grafana | logger=migrator t=2025-06-16T11:46:57.762237599Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=1.157699ms grafana | logger=migrator t=2025-06-16T11:46:57.76646976Z level=info msg="Executing migration" id="add migration run uid column" grafana | logger=migrator t=2025-06-16T11:46:57.775664553Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=9.195143ms grafana | logger=migrator t=2025-06-16T11:46:57.779058889Z level=info msg="Executing migration" id="Update uid column values for migration run" grafana | logger=migrator t=2025-06-16T11:46:57.779212412Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=153.403µs grafana | logger=migrator t=2025-06-16T11:46:57.781649502Z level=info msg="Executing migration" id="Add unique index 
migration_run_uid" grafana | logger=migrator t=2025-06-16T11:46:57.782588818Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=939.096µs grafana | logger=migrator t=2025-06-16T11:46:57.787326747Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-16T11:46:57.80973326Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=22.406083ms grafana | logger=migrator t=2025-06-16T11:46:57.818295033Z level=info msg="Executing migration" id="create cloud_migration_session v2" grafana | logger=migrator t=2025-06-16T11:46:57.818996044Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=700.391µs grafana | logger=migrator t=2025-06-16T11:46:57.823206205Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2" grafana | logger=migrator t=2025-06-16T11:46:57.824074389Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=867.724µs grafana | logger=migrator t=2025-06-16T11:46:57.828164288Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2" grafana | logger=migrator t=2025-06-16T11:46:57.828495353Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=330.935µs grafana | logger=migrator t=2025-06-16T11:46:57.831852789Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty" grafana | logger=migrator t=2025-06-16T11:46:57.832722983Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=870.054µs grafana | logger=migrator t=2025-06-16T11:46:57.837986471Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" grafana | logger=migrator t=2025-06-16T11:46:57.865665342Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=27.678751ms grafana | logger=migrator t=2025-06-16T11:46:57.872746479Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2" grafana | logger=migrator t=2025-06-16T11:46:57.873558733Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=812.044µs grafana | logger=migrator t=2025-06-16T11:46:57.877083112Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2" grafana | logger=migrator t=2025-06-16T11:46:57.878954423Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=1.870551ms grafana | logger=migrator t=2025-06-16T11:46:57.882561633Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2" grafana | logger=migrator t=2025-06-16T11:46:57.883102712Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=540.489µs grafana | logger=migrator t=2025-06-16T11:46:57.888144676Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty" grafana | logger=migrator t=2025-06-16T11:46:57.888952959Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=807.843µs grafana | logger=migrator t=2025-06-16T11:46:57.89256689Z level=info 
msg="Executing migration" id="add snapshot upload_url column" grafana | logger=migrator t=2025-06-16T11:46:57.905039017Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=12.472037ms grafana | logger=migrator t=2025-06-16T11:46:57.908568676Z level=info msg="Executing migration" id="add snapshot status column" grafana | logger=migrator t=2025-06-16T11:46:57.915575513Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=7.005507ms grafana | logger=migrator t=2025-06-16T11:46:57.958590419Z level=info msg="Executing migration" id="add snapshot local_directory column" grafana | logger=migrator t=2025-06-16T11:46:57.971430553Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=12.841614ms grafana | logger=migrator t=2025-06-16T11:46:57.978206206Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column" grafana | logger=migrator t=2025-06-16T11:46:57.987052463Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=8.845197ms grafana | logger=migrator t=2025-06-16T11:46:57.990848187Z level=info msg="Executing migration" id="add snapshot encryption_key column" grafana | logger=migrator t=2025-06-16T11:46:58.000500807Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=9.65205ms grafana | logger=migrator t=2025-06-16T11:46:58.005982639Z level=info msg="Executing migration" id="add snapshot error_string column" grafana | logger=migrator t=2025-06-16T11:46:58.014694753Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=8.710254ms grafana | logger=migrator t=2025-06-16T11:46:58.018180981Z level=info msg="Executing migration" id="create cloud_migration_resource table v1" grafana | logger=migrator t=2025-06-16T11:46:58.019108367Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=927.136µs grafana | logger=migrator t=2025-06-16T11:46:58.022776948Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column" grafana | logger=migrator t=2025-06-16T11:46:58.06131037Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=38.535352ms grafana | logger=migrator t=2025-06-16T11:46:58.081605507Z level=info msg="Executing migration" id="add cloud_migration_resource.name column" grafana | logger=migrator t=2025-06-16T11:46:58.092108062Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=10.502705ms grafana | logger=migrator t=2025-06-16T11:46:58.095611051Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" grafana | logger=migrator t=2025-06-16T11:46:58.105125379Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=9.513917ms grafana | logger=migrator t=2025-06-16T11:46:58.1100122Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" grafana | logger=migrator t=2025-06-16T11:46:58.120689518Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=10.677658ms grafana | logger=migrator t=2025-06-16T11:46:58.125980486Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" grafana | logger=migrator t=2025-06-16T11:46:58.13643371Z level=info 
msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=10.493355ms grafana | logger=migrator t=2025-06-16T11:46:58.139862947Z level=info msg="Executing migration" id="increase resource_uid column length" grafana | logger=migrator t=2025-06-16T11:46:58.139880458Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=17.961µs grafana | logger=migrator t=2025-06-16T11:46:58.142957139Z level=info msg="Executing migration" id="alter kv_store.value to longtext" grafana | logger=migrator t=2025-06-16T11:46:58.142974139Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=17.01µs grafana | logger=migrator t=2025-06-16T11:46:58.145397049Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" grafana | logger=migrator t=2025-06-16T11:46:58.155219593Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=9.819263ms grafana | logger=migrator t=2025-06-16T11:46:58.160093174Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" grafana | logger=migrator t=2025-06-16T11:46:58.169223776Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.121771ms grafana | logger=migrator t=2025-06-16T11:46:58.173140991Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" grafana | logger=migrator t=2025-06-16T11:46:58.173707451Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=566.33µs grafana | logger=migrator t=2025-06-16T11:46:58.17725823Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" grafana | logger=migrator t=2025-06-16T11:46:58.177699187Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=440.387µs grafana | logger=migrator t=2025-06-16T11:46:58.191333224Z level=info msg="Executing migration" id="add record column to alert_rule table" grafana | logger=migrator t=2025-06-16T11:46:58.204055476Z level=info msg="Migration successfully executed" id="add record column to alert_rule table" duration=12.722412ms grafana | logger=migrator t=2025-06-16T11:46:58.208596651Z level=info msg="Executing migration" id="add record column to alert_rule_version table" grafana | logger=migrator t=2025-06-16T11:46:58.218247262Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=9.650161ms grafana | logger=migrator t=2025-06-16T11:46:58.22589585Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" grafana | logger=migrator t=2025-06-16T11:46:58.237428492Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=11.532141ms grafana | logger=migrator t=2025-06-16T11:46:58.241220454Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" grafana | logger=migrator t=2025-06-16T11:46:58.248982024Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=7.76044ms grafana | logger=migrator t=2025-06-16T11:46:58.254050598Z level=info msg="Executing migration" id="Add scope to 
alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" grafana | logger=migrator t=2025-06-16T11:46:58.254602557Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=551.709µs grafana | logger=migrator t=2025-06-16T11:46:58.257700639Z level=info msg="Executing migration" id="add metadata column to alert_rule table" grafana | logger=migrator t=2025-06-16T11:46:58.267451801Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=9.750152ms grafana | logger=migrator t=2025-06-16T11:46:58.271485198Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" grafana | logger=migrator t=2025-06-16T11:46:58.28123814Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=9.751112ms grafana | logger=migrator t=2025-06-16T11:46:58.285905618Z level=info msg="Executing migration" id="delete orphaned service account permissions" grafana | logger=migrator t=2025-06-16T11:46:58.286187513Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=282.175µs grafana | logger=migrator t=2025-06-16T11:46:58.299034337Z level=info msg="Executing migration" id="adding action set permissions" grafana | logger=migrator t=2025-06-16T11:46:58.299856801Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=819.764µs grafana | logger=migrator t=2025-06-16T11:46:58.306091814Z level=info msg="Executing migration" id="create user_external_session table" grafana | logger=migrator t=2025-06-16T11:46:58.307785453Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=1.693119ms grafana | logger=migrator t=2025-06-16T11:46:58.312576733Z level=info msg="Executing migration" id="increase name_id column length to 1024" grafana | logger=migrator t=2025-06-16T11:46:58.312603933Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=28.39µs grafana | logger=migrator t=2025-06-16T11:46:58.316157562Z level=info msg="Executing migration" id="increase session_id column length to 1024" grafana | logger=migrator t=2025-06-16T11:46:58.316188793Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=26.52µs grafana | logger=migrator t=2025-06-16T11:46:58.322086021Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create" grafana | logger=migrator t=2025-06-16T11:46:58.32265558Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=564.329µs grafana | logger=migrator t=2025-06-16T11:46:58.327188316Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table" grafana | logger=migrator t=2025-06-16T11:46:58.338591665Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=11.403159ms grafana | logger=migrator t=2025-06-16T11:46:58.342943937Z level=info msg="Executing migration" id="add updated_by column to alert_rule table" grafana | logger=migrator t=2025-06-16T11:46:58.349793412Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=6.849115ms grafana | logger=migrator t=2025-06-16T11:46:58.353084517Z 
level=info msg="Executing migration" id="add alert_rule_state table" grafana | logger=migrator t=2025-06-16T11:46:58.354055752Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=970.805µs grafana | logger=migrator t=2025-06-16T11:46:58.36047452Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns" grafana | logger=migrator t=2025-06-16T11:46:58.362460402Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=1.985182ms grafana | logger=migrator t=2025-06-16T11:46:58.367254772Z level=info msg="Executing migration" id="add guid column to alert_rule table" grafana | logger=migrator t=2025-06-16T11:46:58.37729577Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=10.041188ms grafana | logger=migrator t=2025-06-16T11:46:58.381312077Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table" grafana | logger=migrator t=2025-06-16T11:46:58.388526207Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=7.21328ms grafana | logger=migrator t=2025-06-16T11:46:58.392900079Z level=info msg="Executing migration" id="cleanup alert_rule_version table" grafana | logger=migrator t=2025-06-16T11:46:58.39292724Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0 grafana | logger=migrator t=2025-06-16T11:46:58.393140913Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100 grafana | logger=migrator t=2025-06-16T11:46:58.393157374Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=257.595µs grafana | logger=migrator t=2025-06-16T11:46:58.403171721Z level=info msg="Executing migration" id="populate rule guid in alert rule table" grafana | logger=migrator t=2025-06-16T11:46:58.40376328Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=590.879µs grafana | logger=migrator t=2025-06-16T11:46:58.439298752Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2025-06-16T11:46:58.441061371Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=2.454661ms grafana | logger=migrator t=2025-06-16T11:46:58.444945766Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" grafana | logger=migrator t=2025-06-16T11:46:58.446147436Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=1.20117ms grafana | logger=migrator t=2025-06-16T11:46:58.449372139Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns" grafana | logger=migrator t=2025-06-16T11:46:58.450502059Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=1.12965ms grafana | logger=migrator t=2025-06-16T11:46:58.454777269Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns" grafana | logger=migrator t=2025-06-16T11:46:58.455877628Z level=info msg="Migration 
successfully executed" id="add index in alert_rule table on guid columns" duration=1.099779ms grafana | logger=migrator t=2025-06-16T11:46:58.459155763Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule" grafana | logger=migrator t=2025-06-16T11:46:58.470584483Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=11.42979ms grafana | logger=migrator t=2025-06-16T11:46:58.473946769Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version" grafana | logger=migrator t=2025-06-16T11:46:58.480945985Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=7.000256ms grafana | logger=migrator t=2025-06-16T11:46:58.484895251Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule" grafana | logger=migrator t=2025-06-16T11:46:58.494613962Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=9.718311ms grafana | logger=migrator t=2025-06-16T11:46:58.497985139Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version" grafana | logger=migrator t=2025-06-16T11:46:58.5077157Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=9.729991ms grafana | logger=migrator t=2025-06-16T11:46:58.511378441Z level=info msg="Executing migration" id="remove the datasources:drilldown action" grafana | logger=migrator t=2025-06-16T11:46:58.511551875Z level=info msg="Removed 0 datasources:drilldown permissions" grafana | logger=migrator t=2025-06-16T11:46:58.511566585Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=188.744µs grafana | logger=migrator t=2025-06-16T11:46:58.516759411Z level=info msg="Executing migration" id="remove title in folder unique index" grafana | logger=migrator t=2025-06-16T11:46:58.518290197Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=1.529176ms grafana | logger=migrator t=2025-06-16T11:46:58.523454713Z level=info msg="migrations completed" performed=654 skipped=0 duration=5.642875117s grafana | logger=migrator t=2025-06-16T11:46:58.524367718Z level=info msg="Unlocking database" grafana | logger=sqlstore t=2025-06-16T11:46:58.543911893Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2025-06-16T11:46:58.544114366Z level=info msg="Created default organization" grafana | logger=secrets t=2025-06-16T11:46:58.56230892Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-16T11:46:58.65068236Z level=info msg="Restored cache from database" duration=437.197µs grafana | logger=resource-migrator t=2025-06-16T11:46:58.66021159Z level=info msg="Locking database" grafana | logger=resource-migrator t=2025-06-16T11:46:58.66023896Z level=info msg="Starting DB migrations" grafana | logger=resource-migrator t=2025-06-16T11:46:58.667650093Z level=info msg="Executing migration" id="create resource_migration_log table" grafana | logger=resource-migrator t=2025-06-16T11:46:58.668465047Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=814.494µs grafana | logger=resource-migrator t=2025-06-16T11:46:58.677228413Z level=info 
msg="Executing migration" id="Initialize resource tables" grafana | logger=resource-migrator t=2025-06-16T11:46:58.677266594Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=40.781µs grafana | logger=resource-migrator t=2025-06-16T11:46:58.682259836Z level=info msg="Executing migration" id="drop table resource" grafana | logger=resource-migrator t=2025-06-16T11:46:58.682394849Z level=info msg="Migration successfully executed" id="drop table resource" duration=135.743µs grafana | logger=resource-migrator t=2025-06-16T11:46:58.686620209Z level=info msg="Executing migration" id="create table resource" grafana | logger=resource-migrator t=2025-06-16T11:46:58.687785718Z level=info msg="Migration successfully executed" id="create table resource" duration=1.165179ms grafana | logger=resource-migrator t=2025-06-16T11:46:58.692963844Z level=info msg="Executing migration" id="create table resource, index: 0" grafana | logger=resource-migrator t=2025-06-16T11:46:58.694702183Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.736579ms grafana | logger=resource-migrator t=2025-06-16T11:46:58.699403772Z level=info msg="Executing migration" id="drop table resource_history" grafana | logger=resource-migrator t=2025-06-16T11:46:58.699521044Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=117.712µs grafana | logger=resource-migrator t=2025-06-16T11:46:58.702188369Z level=info msg="Executing migration" id="create table resource_history" grafana | logger=resource-migrator t=2025-06-16T11:46:58.703310087Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.121699ms grafana | logger=resource-migrator t=2025-06-16T11:46:58.709311306Z level=info msg="Executing migration" id="create table resource_history, index: 0" grafana | logger=resource-migrator t=2025-06-16T11:46:58.710614609Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=1.302873ms grafana | logger=resource-migrator t=2025-06-16T11:46:58.714456222Z level=info msg="Executing migration" id="create table resource_history, index: 1" grafana | logger=resource-migrator t=2025-06-16T11:46:58.715591211Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.129399ms grafana | logger=resource-migrator t=2025-06-16T11:46:58.72035549Z level=info msg="Executing migration" id="drop table resource_version" grafana | logger=resource-migrator t=2025-06-16T11:46:58.720475602Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=120.572µs grafana | logger=resource-migrator t=2025-06-16T11:46:58.725498006Z level=info msg="Executing migration" id="create table resource_version" grafana | logger=resource-migrator t=2025-06-16T11:46:58.726848058Z level=info msg="Migration successfully executed" id="create table resource_version" duration=1.349222ms grafana | logger=resource-migrator t=2025-06-16T11:46:58.730991648Z level=info msg="Executing migration" id="create table resource_version, index: 0" grafana | logger=resource-migrator t=2025-06-16T11:46:58.732147656Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.153838ms grafana | logger=resource-migrator t=2025-06-16T11:46:58.735463611Z level=info msg="Executing migration" id="drop table resource_blob" grafana | logger=resource-migrator 
t=2025-06-16T11:46:58.735537863Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=74.122µs grafana | logger=resource-migrator t=2025-06-16T11:46:58.744430232Z level=info msg="Executing migration" id="create table resource_blob" grafana | logger=resource-migrator t=2025-06-16T11:46:58.746508275Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=2.079704ms grafana | logger=resource-migrator t=2025-06-16T11:46:58.752146879Z level=info msg="Executing migration" id="create table resource_blob, index: 0" grafana | logger=resource-migrator t=2025-06-16T11:46:58.753491832Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=1.345873ms grafana | logger=resource-migrator t=2025-06-16T11:46:58.759209767Z level=info msg="Executing migration" id="create table resource_blob, index: 1" grafana | logger=resource-migrator t=2025-06-16T11:46:58.761574397Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=2.363359ms grafana | logger=resource-migrator t=2025-06-16T11:46:58.794035267Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history" grafana | logger=resource-migrator t=2025-06-16T11:46:58.807899707Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=13.86547ms grafana | logger=resource-migrator t=2025-06-16T11:46:58.811112461Z level=info msg="Executing migration" id="Add column previous_resource_version in resource" grafana | logger=resource-migrator t=2025-06-16T11:46:58.820146602Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=9.033511ms grafana | logger=resource-migrator t=2025-06-16T11:46:58.824685337Z level=info msg="Executing migration" id="Add index to resource_history for polling" grafana | logger=resource-migrator t=2025-06-16T11:46:58.825918078Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=1.232641ms grafana | logger=resource-migrator t=2025-06-16T11:46:58.831548092Z level=info msg="Executing migration" id="Add index to resource for loading" grafana | logger=resource-migrator t=2025-06-16T11:46:58.832796442Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=1.247821ms grafana | logger=resource-migrator t=2025-06-16T11:46:58.836101077Z level=info msg="Executing migration" id="Add column folder in resource_history" grafana | logger=resource-migrator t=2025-06-16T11:46:58.846800005Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=10.698508ms grafana | logger=resource-migrator t=2025-06-16T11:46:58.851509604Z level=info msg="Executing migration" id="Add column folder in resource" grafana | logger=resource-migrator t=2025-06-16T11:46:58.863355641Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=11.845517ms grafana | logger=resource-migrator t=2025-06-16T11:46:58.867927837Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects" grafana | logger=deletion-marker-migrator t=2025-06-16T11:46:58.867948548Z level=info msg="finding any deletion markers" grafana | logger=resource-migrator t=2025-06-16T11:46:58.868400295Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" 
duration=471.528µs grafana | logger=resource-migrator t=2025-06-16T11:46:58.872736477Z level=info msg="Executing migration" id="Add index to resource_history for get trash" grafana | logger=resource-migrator t=2025-06-16T11:46:58.875687017Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=2.95104ms grafana | logger=resource-migrator t=2025-06-16T11:46:58.879665632Z level=info msg="Executing migration" id="Add generation to resource history" grafana | logger=resource-migrator t=2025-06-16T11:46:58.890796808Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=11.131666ms grafana | logger=resource-migrator t=2025-06-16T11:46:58.905884229Z level=info msg="Executing migration" id="Add generation index to resource history" grafana | logger=resource-migrator t=2025-06-16T11:46:58.909128262Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=3.243543ms grafana | logger=resource-migrator t=2025-06-16T11:46:58.915654482Z level=info msg="migrations completed" performed=26 skipped=0 duration=248.05252ms grafana | logger=resource-migrator t=2025-06-16T11:46:58.916290582Z level=info msg="Unlocking database" grafana | t=2025-06-16T11:46:58.916563957Z level=info caller=logger.go:214 time=2025-06-16T11:46:58.916534826Z msg="Using channel notifier" logger=sql-resource-server grafana | logger=plugin.store t=2025-06-16T11:46:58.926686855Z level=info msg="Loading plugins..." grafana | logger=plugins.registration t=2025-06-16T11:46:58.962744496Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugins.initialization t=2025-06-16T11:46:58.962772326Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" grafana | logger=plugin.store t=2025-06-16T11:46:58.962818947Z level=info msg="Plugins loaded" count=53 duration=36.133082ms grafana | logger=query_data t=2025-06-16T11:46:58.969166342Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2025-06-16T11:46:58.97380212Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-16T11:46:58.992675853Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 grafana | logger=ngalert t=2025-06-16T11:46:59.027030776Z level=info msg="Using simple database alert instance store" grafana | logger=ngalert.state.manager.persist t=2025-06-16T11:46:59.027084427Z level=info msg="Using sync state persister" grafana | logger=infra.usagestats.collector t=2025-06-16T11:46:59.031240755Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=grafanaStorageLogger t=2025-06-16T11:46:59.034224325Z level=info msg="Storage starting" grafana | logger=ngalert.state.manager t=2025-06-16T11:46:59.034587341Z level=info msg="Warming state cache for startup" grafana | logger=ngalert.multiorg.alertmanager t=2025-06-16T11:46:59.037041592Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=plugin.backgroundinstaller t=2025-06-16T11:46:59.03868114Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version= grafana | logger=http.server t=2025-06-16T11:46:59.041546797Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=sqlstore.transactions t=2025-06-16T11:46:59.080299202Z level=info 
msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=ngalert.state.manager t=2025-06-16T11:46:59.093344569Z level=info msg="State cache has been initialized" states=0 duration=58.757158ms grafana | logger=ngalert.scheduler t=2025-06-16T11:46:59.093386449Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 grafana | logger=ticker t=2025-06-16T11:46:59.09344925Z level=info msg=starting first_tick=2025-06-16T11:47:00Z grafana | logger=sqlstore.transactions t=2025-06-16T11:46:59.094491278Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 grafana | logger=plugins.update.checker t=2025-06-16T11:46:59.128704307Z level=info msg="Update check succeeded" duration=92.719252ms grafana | logger=grafana.update.checker t=2025-06-16T11:46:59.131610726Z level=info msg="Update check succeeded" duration=96.779581ms grafana | logger=provisioning.datasources t=2025-06-16T11:46:59.164721017Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=sqlstore.transactions t=2025-06-16T11:46:59.186949347Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 grafana | logger=provisioning.alerting t=2025-06-16T11:46:59.194696575Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2025-06-16T11:46:59.194721075Z level=info msg="finished to provision alerting" grafana | logger=provisioning.dashboard t=2025-06-16T11:46:59.19678011Z level=info msg="starting to provision dashboards" grafana | logger=sqlstore.transactions t=2025-06-16T11:46:59.198409297Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-16T11:46:59.281920617Z level=info msg="Patterns update finished" duration=107.158274ms grafana | logger=grafana-apiserver t=2025-06-16T11:46:59.462066354Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T11:46:59.467556216Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T11:46:59.468288848Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T11:46:59.468924348Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T11:46:59.469683641Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T11:46:59.471035883Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T11:46:59.473048957Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T11:46:59.475809423Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager" grafana | logger=grafana-apiserver t=2025-06-16T11:46:59.47682018Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager" grafana | logger=app-registry t=2025-06-16T11:46:59.540250185Z level=info msg="app registry initialized" grafana | logger=plugin.installer t=2025-06-16T11:46:59.54053232Z level=info msg="Installing plugin" 
pluginId=grafana-lokiexplore-app version= grafana | logger=installer.fs t=2025-06-16T11:46:59.705839991Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.17 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app" grafana | logger=plugins.registration t=2025-06-16T11:46:59.74424593Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app grafana | logger=plugin.backgroundinstaller t=2025-06-16T11:46:59.74427724Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=705.53971ms grafana | logger=plugin.backgroundinstaller t=2025-06-16T11:46:59.744297551Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=plugin.installer t=2025-06-16T11:46:59.926167347Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= grafana | logger=installer.fs t=2025-06-16T11:46:59.990023789Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app" grafana | logger=plugins.registration t=2025-06-16T11:47:00.006455043Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app grafana | logger=plugin.backgroundinstaller t=2025-06-16T11:47:00.006476763Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=262.174602ms grafana | logger=plugin.backgroundinstaller t=2025-06-16T11:47:00.006496803Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=provisioning.dashboard t=2025-06-16T11:47:00.10608256Z level=info msg="finished to provision dashboards" grafana | logger=plugin.installer t=2025-06-16T11:47:00.19148191Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version= grafana | logger=installer.fs t=2025-06-16T11:47:00.257435947Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app" grafana | logger=plugins.registration t=2025-06-16T11:47:00.274283797Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app grafana | logger=plugin.backgroundinstaller t=2025-06-16T11:47:00.274310298Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=267.809035ms grafana | logger=plugin.backgroundinstaller t=2025-06-16T11:47:00.274329918Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=plugin.installer t=2025-06-16T11:47:00.518243485Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version= grafana | logger=installer.fs t=2025-06-16T11:47:00.586894166Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app" grafana | logger=plugins.registration t=2025-06-16T11:47:00.60572012Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app grafana | logger=plugin.backgroundinstaller t=2025-06-16T11:47:00.6057439Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=331.407242ms grafana | logger=infra.usagestats t=2025-06-16T11:48:42.046022505Z level=info msg="Usage stats are ready to report" kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... 
kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | [2025-06-16 11:46:51,749] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:51,749] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:51,750] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:51,750] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:51,750] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:51,750] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar 
(org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:51,750] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:51,750] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:51,750] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:51,750] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:51,750] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:51,751] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:51,751] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:51,751] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:51,751] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:51,751] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:51,751] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:51,751] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:51,754] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:51,757] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-16 11:46:51,761] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-16 11:46:51,767] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-16 11:46:51,785] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-16 11:46:51,786] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-16 11:46:51,792] INFO Socket connection established, initiating session, client: /172.17.0.5:44528, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-16 11:46:51,831] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x100000273560000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-16 11:46:51,949] INFO Session: 0x100000273560000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:51,950] INFO EventThread shut down for session: 0x100000273560000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... 
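(Annotation, not part of the console output.) The preflight step above only opens a ZooKeeper session against zookeeper:2181, confirms it, and closes it before Kafka itself is launched. A minimal sketch of an equivalent health probe, assuming the kazoo client library were available on the host (it is not installed by this job; the connect string and 40000 ms timeout are taken from the log lines above):

# Illustrative only: approximates the "Check if Zookeeper is healthy ..." preflight step.
from kazoo.client import KazooClient

zk = KazooClient(hosts="zookeeper:2181", timeout=40.0)  # values mirror the preflight log
zk.start()                      # establishes a session, as in the session-id lines above
print(zk.get_children("/"))     # any read proves the ensemble is answering
zk.stop()
zk.close()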
kafka | [2025-06-16 11:46:52,569] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2025-06-16 11:46:52,862] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2025-06-16 11:46:52,937] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2025-06-16 11:46:52,938] INFO starting (kafka.server.KafkaServer) kafka | [2025-06-16 11:46:52,939] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2025-06-16 11:46:52,956] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-16 11:46:52,961] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:52,961] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:52,961] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:52,961] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:52,962] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:52,962] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.
jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/
java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:52,962] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:52,962] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:52,962] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:52,962] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:52,962] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:52,963] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:52,963] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:52,963] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:52,963] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:52,963] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:52,963] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:52,963] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:52,966] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@52851b44 (org.apache.zookeeper.ZooKeeper) kafka | [2025-06-16 11:46:52,971] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2025-06-16 11:46:52,976] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-16 11:46:52,978] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-16 11:46:52,981] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-16 11:46:52,987] INFO Socket connection established, initiating session, client: /172.17.0.5:44530, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-16 11:46:52,996] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x100000273560001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2025-06-16 11:46:52,999] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2025-06-16 11:46:53,299] INFO Cluster ID = Y_BS0uSaQHW9oN2tPXU35A (kafka.server.KafkaServer) kafka | [2025-06-16 11:46:53,302] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2025-06-16 11:46:53,353] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.initial.rebalance.delay.ms = 3000 kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.4-IV0 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | 
log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | 
remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.manager.class.name = null kafka | remote.log.metadata.manager.class.path = null kafka | remote.log.metadata.manager.impl.prefix = null kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = null kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 kafka | sasl.mechanism.controller.protocol = GSSAPI kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope kafka | sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | socket.connection.setup.timeout.max.ms = 30000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.listen.backlog.size = 50 kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka 
| ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null kafka | ssl.keystore.password = null kafka | ssl.keystore.type = JKS kafka | ssl.principal.mapping.rules = DEFAULT kafka | ssl.protocol = TLSv1.3 kafka | ssl.provider = null kafka | ssl.secure.random.implementation = null kafka | ssl.trustmanager.algorithm = PKIX kafka | ssl.truststore.certificates = null kafka | ssl.truststore.location = null kafka | ssl.truststore.password = null kafka | ssl.truststore.type = JKS kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 kafka | transaction.max.timeout.ms = 900000 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 kafka | transaction.state.log.load.buffer.size = 5242880 kafka | transaction.state.log.min.isr = 2 kafka | transaction.state.log.num.partitions = 50 kafka | transaction.state.log.replication.factor = 3 kafka | transaction.state.log.segment.bytes = 104857600 kafka | transactional.id.expiration.ms = 604800000 kafka | unclean.leader.election.enable = false kafka | zookeeper.clientCnxnSocket = null kafka | zookeeper.connect = zookeeper:2181 kafka | zookeeper.connection.timeout.ms = null kafka | zookeeper.max.in.flight.requests = 10 kafka | zookeeper.metadata.migration.enable = false kafka | zookeeper.session.timeout.ms = 18000 kafka | zookeeper.set.acl = false kafka | zookeeper.ssl.cipher.suites = null kafka | zookeeper.ssl.client.enable = false kafka | zookeeper.ssl.crl.enable = false kafka | zookeeper.ssl.enabled.protocols = null kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS kafka | zookeeper.ssl.keystore.location = null kafka | zookeeper.ssl.keystore.password = null kafka | zookeeper.ssl.keystore.type = null kafka | zookeeper.ssl.ocsp.enable = false kafka | zookeeper.ssl.protocol = TLSv1.2 kafka | zookeeper.ssl.truststore.location = null kafka | zookeeper.ssl.truststore.password = null kafka | zookeeper.ssl.truststore.type = null kafka | (kafka.server.KafkaConfig) kafka | [2025-06-16 11:46:53,389] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-16 11:46:53,392] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-16 11:46:53,393] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-16 11:46:53,393] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2025-06-16 11:46:53,440] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) kafka | [2025-06-16 11:46:53,442] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager) kafka | [2025-06-16 11:46:53,454] INFO Loaded 0 logs in 14ms. (kafka.log.LogManager) kafka | [2025-06-16 11:46:53,454] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) kafka | [2025-06-16 11:46:53,456] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
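
Editor's note: the dump above is the broker's effective kafka.server.KafkaConfig. As a side reference, here is a minimal sketch of reading the same effective configuration back through the Kafka admin API. It assumes the confluent-kafka Python package is installed and that the PLAINTEXT_HOST listener on localhost:29092 from this compose setup is reachable; broker id "1" matches the single broker in this environment.

# Sketch (assumption: confluent-kafka available, localhost:29092 reachable).
# Reads back a few of the broker settings printed in the KafkaConfig dump above.
from confluent_kafka.admin import AdminClient, ConfigResource

admin = AdminClient({"bootstrap.servers": "localhost:29092"})

# Broker id 1 is the only broker registered in this CSIT environment.
resource = ConfigResource(ConfigResource.Type.BROKER, "1")

for res, future in admin.describe_configs([resource]).items():
    configs = future.result()  # dict: config name -> ConfigEntry
    for name in ("zookeeper.connect",
                 "transaction.state.log.num.partitions",
                 "unclean.leader.election.enable"):
        if name in configs:
            print(f"{name} = {configs[name].value}")
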
(kafka.log.LogManager) kafka | [2025-06-16 11:46:53,465] INFO Starting the log cleaner (kafka.log.LogCleaner) kafka | [2025-06-16 11:46:53,507] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner) kafka | [2025-06-16 11:46:53,519] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) kafka | [2025-06-16 11:46:53,534] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) kafka | [2025-06-16 11:46:53,577] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-16 11:46:53,923] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2025-06-16 11:46:53,928] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) kafka | [2025-06-16 11:46:53,950] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) kafka | [2025-06-16 11:46:53,950] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2025-06-16 11:46:53,951] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) kafka | [2025-06-16 11:46:53,956] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) kafka | [2025-06-16 11:46:53,967] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-16 11:46:53,992] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-16 11:46:53,995] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-16 11:46:53,996] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-16 11:46:53,999] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-16 11:46:54,016] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) kafka | [2025-06-16 11:46:54,039] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) kafka | [2025-06-16 11:46:54,061] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1750074414053,1750074414053,1,0,0,72057604562878465,258,0,27 kafka | (kafka.zk.KafkaZkClient) kafka | [2025-06-16 11:46:54,062] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) kafka | [2025-06-16 11:46:54,114] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) kafka | [2025-06-16 11:46:54,126] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-16 11:46:54,133] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-16 11:46:54,133] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-16 11:46:54,144] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) kafka | [2025-06-16 11:46:54,150] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:46:54,154] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,159] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,159] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:46:54,165] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) kafka | [2025-06-16 11:46:54,189] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2025-06-16 11:46:54,196] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) kafka | [2025-06-16 11:46:54,197] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2025-06-16 11:46:54,200] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). 
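
Editor's note: at this point the broker is registered in ZooKeeper with the advertised listeners PLAINTEXT://kafka:9092 and PLAINTEXT_HOST://localhost:29092, and the group/transaction coordinators are starting. A minimal smoke-test sketch against the host-facing listener follows; it assumes the confluent-kafka Python package and uses the policy-pdp-pap topic that is created later in this log.

# Sketch (assumption: confluent-kafka available, broker reachable on localhost:29092).
# Produces one test message through the PLAINTEXT_HOST listener registered above.
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:29092"})

def delivery(err, msg):
    # Called once the broker acknowledges (or rejects) the message.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}] @ offset {msg.offset()}")

producer.produce("policy-pdp-pap", value=b"ping", callback=delivery)
producer.flush(10)
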
(kafka.server.metadata.ZkMetadataCache) kafka | [2025-06-16 11:46:54,200] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,207] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,210] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,215] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,238] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,247] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,249] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2025-06-16 11:46:54,254] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) kafka | [2025-06-16 11:46:54,265] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) kafka | [2025-06-16 11:46:54,266] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,266] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,267] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,267] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,270] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,270] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,270] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,271] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) kafka | [2025-06-16 11:46:54,271] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,274] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) kafka | [2025-06-16 11:46:54,279] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-16 11:46:54,280] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-16 11:46:54,290] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-16 11:46:54,291] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) kafka | [2025-06-16 11:46:54,291] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-16 11:46:54,291] INFO [PartitionStateMachine controllerId=1] Triggering online partition 
state changes (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-16 11:46:54,293] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) kafka | [2025-06-16 11:46:54,293] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,296] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) kafka | [2025-06-16 11:46:54,299] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,299] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,299] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,300] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,301] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,303] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) kafka | [2025-06-16 11:46:54,311] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:54,316] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) kafka | [2025-06-16 11:46:54,337] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-16 11:46:54,337] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-16 11:46:54,337] INFO Kafka startTimeMs: 1750074414330 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2025-06-16 11:46:54,340] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) kafka | [2025-06-16 11:46:54,374] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-16 11:46:54,381] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-16 11:46:54,398] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2025-06-16 11:46:59,312] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2025-06-16 11:46:59,313] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2025-06-16 11:47:27,417] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-16 11:47:27,421] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, 
cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-16 11:47:27,430] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) kafka | [2025-06-16 11:47:27,435] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) kafka | [2025-06-16 11:47:27,451] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(1vUAlylBSMO4USo_S3aOEQ),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-16 11:47:27,451] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController) kafka | [2025-06-16 11:47:27,453] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,453] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-16 11:47:27,457] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,457] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-16 11:47:27,484] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,491] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-16 11:47:27,492] INFO [Controller id=1 epoch=1] Sending 
LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-16 11:47:27,495] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) kafka | [2025-06-16 11:47:27,495] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,495] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-16 11:47:27,499] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger) kafka | [2025-06-16 11:47:27,500] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,516] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(qcqke507RcCh6aE31A-Zkw),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, 
addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-16 11:47:27,516] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2025-06-16 11:47:27,517] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,518] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,518] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,519] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,519] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,519] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,519] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,520] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,520] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,520] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,520] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,520] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 
11:47:27,521] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,521] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,521] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,521] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,522] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,522] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,522] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,522] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,522] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,523] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-16 11:47:27,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with 
assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,523] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,524] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager) kafka | [2025-06-16 11:47:27,524] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) kafka | [2025-06-16 11:47:27,524] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:47:27,525] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,531] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,531] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka 
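
Editor's note: the controller activity above corresponds to the creation of policy-pdp-pap (1 partition, default config) and of the broker-internal __consumer_offsets topic (50 partitions, cleanup.policy=compact, segment.bytes=104857600) recorded at 11:47:27. For orientation only, a hedged sketch of the equivalent client-side admin call is shown below; the compacted topic name is hypothetical, since clients never create __consumer_offsets themselves, and the sketch again assumes confluent-kafka on the host.

# Sketch (assumption: confluent-kafka available, broker reachable on localhost:29092).
# Illustrates topic creation comparable to what the controller processes above.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:29092"})

topics = [
    # Mirrors the policy-pdp-pap request seen in the log: 1 partition, default config.
    NewTopic("policy-pdp-pap", num_partitions=1, replication_factor=1),
    # Hypothetical name; only illustrates the compacted-topic settings the broker
    # applies internally to __consumer_offsets.
    NewTopic(
        "example-compacted-topic",
        num_partitions=50,
        replication_factor=1,
        config={"cleanup.policy": "compact", "segment.bytes": "104857600"},
    ),
]

for topic, future in admin.create_topics(topics).items():
    try:
        future.result()
        print(f"created {topic}")
    except Exception as exc:
        print(f"{topic}: {exc}")
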
| [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to 
NewReplica (state.change.logger) kafka | [2025-06-16 11:47:27,532] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-16 11:47:27,592] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,610] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,613] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,613] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,615] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(1vUAlylBSMO4USo_S3aOEQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,627] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,627] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,627] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,627] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2025-06-16 11:47:27,627] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,627] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,628] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,628] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,628] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,628] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,629] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,629] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,629] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,629] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,629] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,629] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,630] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,630] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,630] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,630] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,630] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,630] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, 
leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,630] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,630] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,631] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,631] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,631] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,631] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,631] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,631] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,631] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,631] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,632] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,632] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,632] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,632] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,632] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,632] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,632] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,633] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,633] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,633] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,633] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,633] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,633] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,633] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,633] INFO [Broker id=1] Finished LeaderAndIsr request in 136ms correlationId 1 from controller 1 for 1 partitions (state.change.logger) kafka | [2025-06-16 11:47:27,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, 
partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:47:27,634] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-16 11:47:27,634] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-16 11:47:27,635] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-16 11:47:27,635] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-16 11:47:27,635] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-16 11:47:27,635] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-16 11:47:27,635] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-16 11:47:27,635] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-16 11:47:27,635] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-16 11:47:27,635] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-16 11:47:27,636] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-16 11:47:27,636] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-16 11:47:27,636] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-16 11:47:27,636] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-16 11:47:27,636] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, 
partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=1vUAlylBSMO4USo_S3aOEQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-16 11:47:27,636] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-16 11:47:27,636] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-16 11:47:27,637] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-16 11:47:27,637] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-16 11:47:27,637] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-16 11:47:27,637] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-16 11:47:27,637] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-16 11:47:27,637] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-16 11:47:27,638] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-16 11:47:27,638] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-16 11:47:27,638] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-16 11:47:27,638] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-16 11:47:27,638] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-16 11:47:27,638] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-16 11:47:27,638] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-16 11:47:27,639] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-16 11:47:27,639] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-16 11:47:27,639] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-16 11:47:27,639] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-16 11:47:27,639] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-16 11:47:27,639] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-16 11:47:27,639] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-16 11:47:27,640] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-16 11:47:27,640] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-16 11:47:27,640] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-16 11:47:27,640] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-16 11:47:27,640] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-16 11:47:27,640] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-16 11:47:27,640] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-16 11:47:27,641] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-16 11:47:27,641] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-16 11:47:27,641] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-16 11:47:27,641] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-16 11:47:27,641] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-16 11:47:27,641] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-16 11:47:27,641] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-16 11:47:27,642] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-16 11:47:27,642] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger) kafka | [2025-06-16 11:47:27,642] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 11:47:27,643] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2025-06-16 11:47:27,643] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,643] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-16 11:47:27,644] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,644] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,644] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,644] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,644] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,644] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,644] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,645] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,645] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,645] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,645] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,645] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,645] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,645] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,645] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,646] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,646] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,646] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,646] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,646] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,646] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,646] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,647] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,647] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,647] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica 
(state.change.logger) kafka | [2025-06-16 11:47:27,647] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,647] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,647] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,647] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,647] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,648] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,647] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger) kafka | [2025-06-16 11:47:27,648] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,648] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,648] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,648] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,648] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,648] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,648] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,648] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,648] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,648] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,649] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,649] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,649] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,649] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,649] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,649] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,648] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,649] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,649] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,649] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,650] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,650] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,650] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,650] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,650] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,650] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,650] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,650] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,650] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,650] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,650] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,650] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 
11:47:27,651] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,651] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,651] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,651] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,651] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,651] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,651] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:47:27,651] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,651] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-16 11:47:27,651] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,652] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,653] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,657] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,657] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,657] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:47:27,681] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-16 11:47:27,681] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-16 11:47:27,682] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-16 11:47:27,683] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-16 11:47:27,684] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-16 11:47:27,684] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-16 11:47:27,684] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-16 11:47:27,684] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-16 11:47:27,684] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-16 11:47:27,684] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-16 11:47:27,684] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-16 11:47:27,684] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-16 11:47:27,685] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-16 11:47:27,685] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-16 11:47:27,685] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-16 11:47:27,685] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-16 11:47:27,685] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-16 11:47:27,685] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-16 11:47:27,685] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-16 11:47:27,685] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-16 11:47:27,685] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-16 11:47:27,685] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-16 11:47:27,686] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-16 11:47:27,686] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-16 11:47:27,686] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-16 11:47:27,686] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-16 11:47:27,686] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-16 11:47:27,686] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-16 11:47:27,686] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2025-06-16 11:47:27,686] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-16 11:47:27,687] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, 
__consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) kafka | [2025-06-16 11:47:27,687] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) kafka | [2025-06-16 11:47:27,693] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,694] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,695] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,695] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,696] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,705] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,706] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,706] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,706] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,707] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 11:47:27,713] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,713] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,714] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,714] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,714] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,721] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,722] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,722] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,723] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,723] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,731] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,731] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,732] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,732] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,732] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 11:47:27,738] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,739] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,739] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,739] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,739] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,746] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,747] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,747] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,747] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,748] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,754] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,755] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,755] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,755] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,755] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 11:47:27,762] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,763] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,763] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,763] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,763] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,770] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,771] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,771] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,771] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,771] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,778] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,779] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,780] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,780] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,780] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 11:47:27,786] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,788] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,788] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,788] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,789] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,795] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,796] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,796] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,797] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,797] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,804] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,805] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,805] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,805] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,805] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 11:47:27,812] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,813] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,813] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,813] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,813] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,821] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,821] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,822] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,822] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,822] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,829] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,830] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,830] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,831] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,831] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 11:47:27,837] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,838] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,838] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,838] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,838] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,845] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,845] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,845] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,846] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,846] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,852] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,853] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,853] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,853] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,853] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 11:47:27,860] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,861] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,861] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,861] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,861] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,868] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,869] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,869] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,869] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,870] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,876] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,877] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,877] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,877] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,877] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 11:47:27,884] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,884] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,885] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,885] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,885] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,892] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,892] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,892] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,893] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,893] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,901] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,901] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,902] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,902] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,902] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 11:47:27,909] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,910] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,910] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,910] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,910] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,919] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,920] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,920] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,920] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,920] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,927] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,928] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,928] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,928] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,929] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 11:47:27,939] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,940] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,940] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,941] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,941] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,952] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,954] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,954] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,954] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,954] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,965] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,966] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,967] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,967] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,967] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 11:47:27,980] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,981] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,981] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,982] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,982] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,990] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:27,991] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:27,991] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,991] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:27,991] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:27,999] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:28,000] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:28,000] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,000] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,000] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 11:47:28,007] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:28,008] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:28,008] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,009] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,009] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:28,016] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:28,017] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:28,017] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,017] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,017] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:28,025] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:28,026] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:28,026] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,026] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,026] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 11:47:28,033] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:28,035] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:28,035] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,035] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,035] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:28,042] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:28,043] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:28,043] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,043] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,044] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:28,050] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:28,051] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:28,051] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,051] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,051] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 11:47:28,058] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:28,059] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:28,059] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,060] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,060] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:28,067] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:28,067] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:28,068] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,068] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,068] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:28,075] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:28,076] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:28,076] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,076] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,076] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 11:47:28,083] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:28,084] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:28,084] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,084] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,084] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:28,090] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:28,091] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:28,091] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,091] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,091] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:28,098] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:28,098] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:28,099] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,099] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,099] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 11:47:28,106] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:28,107] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:28,107] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,107] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,107] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:28,114] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:28,115] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:28,115] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,115] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,115] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:47:28,119] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:47:28,120] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2025-06-16 11:47:28,120] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,120] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:47:28,121] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(qcqke507RcCh6aE31A-Zkw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2025-06-16 11:47:28,125] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2025-06-16 11:47:28,125] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2025-06-16 11:47:28,126] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 
1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-13 (state.change.logger) kafka | [2025-06-16 11:47:28,127] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2025-06-16 11:47:28,129] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,130] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,137] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,137] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 6 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,138] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,138] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,138] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,138] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,139] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,139] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,139] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,139] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,139] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,139] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,139] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,139] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,139] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,140] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,140] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,140] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,140] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,140] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,140] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,140] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,140] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,140] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,140] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,140] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,140] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,141] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,141] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,141] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,141] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,141] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,141] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,141] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,141] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,141] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,141] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,142] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,142] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,142] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,142] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,142] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,142] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,142] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,142] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,142] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,142] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,142] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,143] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,143] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,143] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,143] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,143] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,143] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,143] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,143] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,143] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,144] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,144] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,144] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,144] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,144] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,144] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,144] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,144] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,144] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,144] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,144] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,144] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,145] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,145] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,145] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,145] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,145] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,145] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,145] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,145] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,145] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,145] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,146] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,146] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,146] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,146] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,146] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,146] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,146] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,146] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,147] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,147] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,147] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,147] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,147] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,147] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,147] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,147] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,147] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,147] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,147] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,148] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,148] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,148] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,148] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,148] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,148] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,148] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,148] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,149] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,149] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,149] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,149] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,149] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,149] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,149] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,149] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,149] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,149] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,149] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,150] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,150] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,150] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,150] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,150] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,150] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,150] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,150] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,150] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,150] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,150] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,151] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,151] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,151] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,151] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,151] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,151] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,151] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,151] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,151] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,151] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,152] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,152] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,152] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,152] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,152] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,152] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,152] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,152] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,152] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,153] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2025-06-16 11:47:28,153] INFO [Broker id=1] Finished LeaderAndIsr request in 506ms correlationId 3 from controller 1 for 50 partitions (state.change.logger) kafka | [2025-06-16 11:47:28,155] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=qcqke507RcCh6aE31A-Zkw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,157] TRACE [Broker 
id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,157] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka 
| [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with 
correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,158] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for 
partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,159] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,160] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,160] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) kafka | [2025-06-16 11:47:28,160] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-16 11:47:28,306] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 3e2c39b7-eef4-42b5-bb62-dddcc04b4db7 
in Empty state. Created a new member id consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3-2a22e032-03cd-4275-8d5a-b5e00b723573 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:28,325] INFO [GroupCoordinator 1]: Preparing to rebalance group 3e2c39b7-eef4-42b5-bb62-dddcc04b4db7 in state PreparingRebalance with old generation 0 (__consumer_offsets-0) (reason: Adding new member consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3-2a22e032-03cd-4275-8d5a-b5e00b723573 with group instance id None; client reason: need to re-join with the given member-id: consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3-2a22e032-03cd-4275-8d5a-b5e00b723573) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:29,060] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-9d2a71e4-8b7d-42af-bb40-a70da9daaae1 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:29,063] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-9d2a71e4-8b7d-42af-bb40-a70da9daaae1 with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-9d2a71e4-8b7d-42af-bb40-a70da9daaae1) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:31,338] INFO [GroupCoordinator 1]: Stabilized group 3e2c39b7-eef4-42b5-bb62-dddcc04b4db7 generation 1 (__consumer_offsets-0) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:31,363] INFO [GroupCoordinator 1]: Assignment received from leader consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3-2a22e032-03cd-4275-8d5a-b5e00b723573 for group 3e2c39b7-eef4-42b5-bb62-dddcc04b4db7 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:32,065] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:47:32,071] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-9d2a71e4-8b7d-42af-bb40-a70da9daaae1 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:48:12,257] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group opa-pdp in Empty state. Created a new member id rdkafka-cdaa1c56-4335-4672-8bd0-f20246542e73 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:48:12,258] INFO [GroupCoordinator 1]: Preparing to rebalance group opa-pdp in state PreparingRebalance with old generation 0 (__consumer_offsets-25) (reason: Adding new member rdkafka-cdaa1c56-4335-4672-8bd0-f20246542e73 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:48:15,260] INFO [GroupCoordinator 1]: Stabilized group opa-pdp generation 1 (__consumer_offsets-25) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:48:15,264] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-cdaa1c56-4335-4672-8bd0-f20246542e73 for group opa-pdp for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:49:22,961] INFO Creating topic policy-notification with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2025-06-16 11:49:22,974] INFO [Controller id=1] New topics: [Set(policy-notification)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-notification,Some(6jNsCB5yTgmHWeOqbVmcTg),Map(policy-notification-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2025-06-16 11:49:22,974] INFO [Controller id=1] New partition creation callback for policy-notification-0 (kafka.controller.KafkaController) kafka | [2025-06-16 11:49:22,975] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2025-06-16 11:49:22,975] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-16 11:49:22,975] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2025-06-16 11:49:22,975] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-16 11:49:22,986] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2025-06-16 11:49:22,986] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-notification-0 (state.change.logger) kafka | [2025-06-16 11:49:22,986] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) kafka | [2025-06-16 11:49:22,987] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) kafka | [2025-06-16 11:49:22,987] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2025-06-16 11:49:22,987] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2025-06-16 11:49:22,988] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 5 from controller 1 for 1 partitions (state.change.logger) kafka | [2025-06-16 11:49:22,988] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 5 from controller 1 epoch 1 (state.change.logger) kafka | [2025-06-16 11:49:22,989] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 5 from controller 1 epoch 1 starting the become-leader transition for partition 
policy-notification-0 (state.change.logger) kafka | [2025-06-16 11:49:22,989] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-notification-0) (kafka.server.ReplicaFetcherManager) kafka | [2025-06-16 11:49:22,989] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 5 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) kafka | [2025-06-16 11:49:22,992] INFO [LogLoader partition=policy-notification-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2025-06-16 11:49:22,993] INFO Created log for partition policy-notification-0 in /var/lib/kafka/data/policy-notification-0 with properties {} (kafka.log.LogManager) kafka | [2025-06-16 11:49:22,994] INFO [Partition policy-notification-0 broker=1] No checkpointed highwatermark is found for partition policy-notification-0 (kafka.cluster.Partition) kafka | [2025-06-16 11:49:22,995] INFO [Partition policy-notification-0 broker=1] Log loaded for partition policy-notification-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2025-06-16 11:49:22,995] INFO [Broker id=1] Leader policy-notification-0 with topic id Some(6jNsCB5yTgmHWeOqbVmcTg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) kafka | [2025-06-16 11:49:22,998] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 5 from controller 1 epoch 1 for the become-leader transition for partition policy-notification-0 (state.change.logger) kafka | [2025-06-16 11:49:22,999] INFO [Broker id=1] Finished LeaderAndIsr request in 11ms correlationId 5 from controller 1 for 1 partitions (state.change.logger) kafka | [2025-06-16 11:49:22,999] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=6jNsCB5yTgmHWeOqbVmcTg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 5 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-16 11:49:23,000] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-notification-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 6 (state.change.logger) kafka | [2025-06-16 11:49:23,001] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 6 (state.change.logger) kafka | [2025-06-16 11:49:23,001] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 6 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2025-06-16 11:51:01,349] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-cb65e595-cf1c-4820-9b6b-ca0c489c347d and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:01,350] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member rdkafka-cb65e595-cf1c-4820-9b6b-ca0c489c347d with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:04,352] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:04,356] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-cb65e595-cf1c-4820-9b6b-ca0c489c347d for group testgrp for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:04,471] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 1 (__consumer_offsets-3) (reason: Removing member rdkafka-cb65e595-cf1c-4820-9b6b-ca0c489c347d on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:04,472] INFO [GroupCoordinator 1]: Group testgrp with generation 2 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:04,473] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-cb65e595-cf1c-4820-9b6b-ca0c489c347d, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.6, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:26,957] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-1c6019f7-850e-4324-be4f-39b8f1ba3d9b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:26,958] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 2 (__consumer_offsets-3) (reason: Adding new member rdkafka-1c6019f7-850e-4324-be4f-39b8f1ba3d9b with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:29,958] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 3 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:29,961] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-1c6019f7-850e-4324-be4f-39b8f1ba3d9b for group testgrp for generation 3. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:29,967] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 3 (__consumer_offsets-3) (reason: Removing member rdkafka-1c6019f7-850e-4324-be4f-39b8f1ba3d9b on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:29,967] INFO [GroupCoordinator 1]: Group testgrp with generation 4 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:29,968] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-1c6019f7-850e-4324-be4f-39b8f1ba3d9b, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.6, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:52,395] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-81f1f70b-17ed-42b8-9a2a-2cde6622fc53 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:52,396] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 4 (__consumer_offsets-3) (reason: Adding new member rdkafka-81f1f70b-17ed-42b8-9a2a-2cde6622fc53 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:55,398] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 5 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:55,401] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-81f1f70b-17ed-42b8-9a2a-2cde6622fc53 for group testgrp for generation 5. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:55,407] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 5 (__consumer_offsets-3) (reason: Removing member rdkafka-81f1f70b-17ed-42b8-9a2a-2cde6622fc53 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:55,407] INFO [GroupCoordinator 1]: Group testgrp with generation 6 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:55,408] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-81f1f70b-17ed-42b8-9a2a-2cde6622fc53, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.6, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) kafka | [2025-06-16 11:51:59,316] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2025-06-16 11:51:59,317] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2025-06-16 11:51:59,322] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 HashMap() (kafka.controller.KafkaController) kafka | [2025-06-16 11:51:59,323] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController) policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.6:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . ____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | policy-api | :: Spring Boot :: (v3.4.6) policy-api | policy-api | [2025-06-16T11:47:06.739+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final policy-api | [2025-06-16T11:47:06.800+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 39 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2025-06-16T11:47:06.801+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default" policy-api | [2025-06-16T11:47:08.176+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2025-06-16T11:47:08.351+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 165 ms. Found 6 JPA repository interfaces. 
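For reference, the startup gating visible in the policy-api output above ("Waiting for policy-db-migrator port 6824... open") is a plain TCP port wait; policy-db-migrator applies the same pattern against postgres further down. A minimal Python sketch of such a wait loop, assuming a 2 s retry interval and 120 s timeout (neither value is stated in the log):

    import socket
    import time

    def wait_for_port(host: str, port: int, timeout: float = 120.0, interval: float = 2.0) -> None:
        """Block until a TCP connection to host:port succeeds, mirroring the
        'Waiting for <service> port ...' loops in the container output."""
        deadline = time.monotonic() + timeout
        while True:
            try:
                with socket.create_connection((host, port), timeout=interval):
                    print(f"{host}:{port} open")
                    return
            except OSError as exc:
                if time.monotonic() >= deadline:
                    raise TimeoutError(f"{host}:{port} not reachable") from exc
                print(f"connect to {host} port {port} failed: {exc}")
                time.sleep(interval)

    # Values taken from the log above:
    # wait_for_port("policy-db-migrator", 6824)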
policy-api | [2025-06-16T11:47:08.990+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) policy-api | [2025-06-16T11:47:09.003+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2025-06-16T11:47:09.005+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2025-06-16T11:47:09.005+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] policy-api | [2025-06-16T11:47:09.042+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2025-06-16T11:47:09.042+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2182 ms policy-api | [2025-06-16T11:47:09.358+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2025-06-16T11:47:09.434+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final policy-api | [2025-06-16T11:47:09.479+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2025-06-16T11:47:09.858+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2025-06-16T11:47:09.901+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2025-06-16T11:47:10.102+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@5342032a policy-api | [2025-06-16T11:47:10.104+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-api | [2025-06-16T11:47:10.187+00:00|INFO|pooling|main] HHH10001005: Database info: policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] policy-api | Database driver: undefined/unknown policy-api | Database version: 16.4 policy-api | Autocommit mode: undefined/unknown policy-api | Isolation level: undefined/unknown policy-api | Minimum pool size: undefined/unknown policy-api | Maximum pool size: undefined/unknown policy-api | [2025-06-16T11:47:12.253+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2025-06-16T11:47:12.257+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2025-06-16T11:47:12.874+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2025-06-16T11:47:13.724+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2025-06-16T11:47:14.779+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2025-06-16T11:47:14.823+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager policy-api | [2025-06-16T11:47:15.461+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' policy-api | [2025-06-16T11:47:15.591+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2025-06-16T11:47:15.609+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1' policy-api | [2025-06-16T11:47:15.634+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 9.553 seconds (process running for 10.149) policy-api | [2025-06-16T11:47:39.916+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-api | [2025-06-16T11:47:39.916+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-api | [2025-06-16T11:47:39.917+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms policy-api | [2025-06-16T11:50:39.142+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-6] ***** OrderedServiceImpl implementers: policy-api | [] policy-api | [2025-06-16T11:51:55.731+00:00|WARN|CommonRestController|http-nio-6969-exec-1] "incoming fragment" INVALID, item has status INVALID policy-api | item "entity" value "abac:1.0.7" INVALID, does not equal existing entity policy-api | policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot policy-csit | Run Robot test policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates policy-csit | -v POLICY_API_IP:policy-api:6969 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 policy-csit | -v APEX_IP:policy-apex-pdp:6969 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 policy-csit | -v KAFKA_IP:kafka:9092 policy-csit | -v PROMETHEUS_IP:prometheus:9090 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 policy-csit | -v TEMP_FOLDER:/tmp/distribution policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 policy-csit | -v TEST_ENV:docker policy-csit | -v JAEGER_IP:jaeger:16686 policy-csit | Starting Robot test suites ... 
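The ROBOT_VARIABLES block above is handed to Robot Framework as -v name:value options. A minimal sketch of the equivalent programmatic invocation through robot.run, using only values that appear in the log (the suite names, a subset of the variables, and the /tmp/results output directory); the wrapper script itself is an assumption, not the actual run-test entrypoint:

    from robot import run  # Robot Framework's programmatic entry point

    # Subset of the -v options printed above.
    variables = [
        "POLICY_API_IP:policy-api:6969",
        "POLICY_PAP_IP:policy-pap:6969",
        "POLICY_OPA_IP:policy-opa-pdp:8282",
        "KAFKA_IP:kafka:9092",
        "PROMETHEUS_IP:prometheus:9090",
        "TEST_ENV:docker",
    ]

    # Equivalent to: robot -v ... --outputdir /tmp/results opa-pdp-test.robot opa-pdp-slas.robot
    rc = run(
        "opa-pdp-test.robot",
        "opa-pdp-slas.robot",
        variable=variables,
        outputdir="/tmp/results",
    )
    print("RESULT:", rc)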
policy-csit | ============================================================================== policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas policy-csit | ============================================================================== policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test policy-csit | ============================================================================== policy-csit | Healthcheck :: Verify OPA PDP health check | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateDataBeforePolicyDeployment | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidatesZonePolicy | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidatesVehiclePolicy | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidatesAbacPolicy | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test | PASS | policy-csit | 5 tests, 5 passed, 0 failed policy-csit | ============================================================================== policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas policy-csit | ============================================================================== policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS | policy-csit | ------------------------------------------------------------------------------ policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas | PASS | policy-csit | 5 tests, 5 passed, 0 failed policy-csit | ============================================================================== policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas | PASS | policy-csit | 10 tests, 10 passed, 0 failed policy-csit | ============================================================================== policy-csit | Output: /tmp/results/output.xml policy-csit | Log: /tmp/results/log.html policy-csit | Report: /tmp/results/report.html policy-csit | RESULT: 0 policy-db-migrator | Waiting for postgres port 5432... policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused policy-db-migrator | Connection to postgres (172.17.0.2) 5432 port [tcp/postgresql] succeeded! policy-db-migrator | Initializing policyadmin... 
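The Opa-Pdp-Slas cases above (ValidateOPAPolicyDecisionsTotalCounter, ValidateOPADecisionAverageResponseTime, ...) check OPA PDP metrics through the Prometheus instance referenced by PROMETHEUS_IP. A hedged sketch of such a check against the Prometheus HTTP query API; the metric names used here are illustrative assumptions, not taken from the log, and the 1 s latency bound is likewise assumed:

    import requests  # assumed available in the test environment

    PROMETHEUS = "http://prometheus:9090"  # PROMETHEUS_IP from the robot variables above

    def instant_query(expr: str) -> float:
        """Run an instant query and return the first sample value, or 0.0 if empty."""
        resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": expr}, timeout=10)
        resp.raise_for_status()
        result = resp.json()["data"]["result"]
        return float(result[0]["value"][1]) if result else 0.0

    # Hypothetical metric names -- the real ones are defined by the OPA PDP image.
    decisions = instant_query("opa_pdp_decisions_total")
    avg_latency = instant_query(
        "rate(opa_pdp_decision_duration_seconds_sum[5m])"
        " / rate(opa_pdp_decision_duration_seconds_count[5m])"
    )
    assert decisions > 0, "no policy decisions counted"
    assert avg_latency < 1.0, "average decision response time above the assumed bound"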
policy-db-migrator | 321 blocks policy-db-migrator | Preparing upgrade release version: 0800 policy-db-migrator | Preparing upgrade release version: 0900 policy-db-migrator | Preparing upgrade release version: 1000 policy-db-migrator | Preparing upgrade release version: 1100 policy-db-migrator | Preparing upgrade release version: 1200 policy-db-migrator | Preparing upgrade release version: 1300 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------+--------- policy-db-migrator | policyadmin | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | 
operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0210-jpatoscadatatype_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 
policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0470-pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 
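Each "> upgrade NNNN-*.sql ... rc=0" step above applies one numbered script and records it; the changelog columns (script | operation | from_version | to_version | tag | success | attime) were printed earlier in the migrator output. A rough sketch of that loop with psycopg2, assuming the script directory, credentials, and exact bookkeeping, none of which are shown in the log:

    import glob
    import os
    import psycopg2  # assumed available where the migrator runs

    conn = psycopg2.connect(host="postgres", dbname="policyadmin",
                            user="policy_user", password="<placeholder>")

    def upgrade(script_dir: str, from_version: str, to_version: str) -> None:
        """Apply numbered upgrade scripts in order and record each one, mirroring
        the '> upgrade ... rc=0' lines in the migrator output."""
        for path in sorted(glob.glob(os.path.join(script_dir, "*.sql"))):
            script = os.path.basename(path)
            print("> upgrade", script)
            with open(path) as f:
                sql = f.read()
            with conn, conn.cursor() as cur:  # one transaction per script
                cur.execute(sql)
                cur.execute(
                    "INSERT INTO policyadmin_schema_changelog"
                    " (script, operation, from_version, to_version, success, attime)"
                    " VALUES (%s, 'upgrade', %s, %s, 1, now())",
                    (script, from_version, to_version),
                )
            print("rc=0")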
policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0640-toscanodetypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0770-toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0780-toscarequirements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0810-toscatopologytemplate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 
policy-db-migrator | policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdp.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0210-sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0220-sequence.sql policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 
policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-upgrade.sql policy-db-migrator | msg policy-db-migrator | --------------------------- policy-db-migrator | upgrade to 1100 completed policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | DROP INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-pdpstatistics.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0120-statistics_sequence.sql policy-db-migrator | DROP TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | policyadmin: OK: upgrade (1300) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | 
policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | -------------+--------- policy-db-migrator | policyadmin | 1300 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:52.804089 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:52.845331 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:52.886651 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:52.958198 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.00373 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.058252 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.112268 policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.185732 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.233613 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 
1606251146520800u | 1 | 2025-06-16 11:46:53.320445 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.371372 policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.442722 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.496885 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.570912 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.622922 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.679369 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.733754 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.798204 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.845993 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.910867 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.950871 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:53.99255 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.052347 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.098895 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.155127 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.2059 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.262882 policy-db-migrator | 28 | 0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.324801 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.379258 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.435411 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.509748 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.561559 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.64799 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.700869 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.791078 policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 
0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.839196 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.922784 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:54.973951 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.024767 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.081791 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.150603 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.207176 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.283362 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.336493 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.417245 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.473243 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.535113 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.593104 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.657836 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.713272 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.783084 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.830891 policy-db-migrator | 53 | 0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.900392 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:55.951083 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.028323 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.078084 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.144151 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.188981 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.238844 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.290598 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.349564 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.418835 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 
2025-06-16 11:46:56.46943 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.597282 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.676026 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.728496 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.802918 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.853849 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.928032 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:56.97744 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.056509 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.11757 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.192459 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.245755 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.2925 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.339619 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.401154 policy-db-migrator | 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.454113 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.505223 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.55323 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.603128 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.650231 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.703632 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.754283 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.807965 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.864591 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:57.916327 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 
11:46:58.002321 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:58.049299 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:58.128332 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:58.178779 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:58.236366 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:58.288294 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:58.348401 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:58.398403 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1606251146520800u | 1 | 2025-06-16 11:46:58.483457 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:58.536913 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:58.611596 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:58.661292 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:58.715382 policy-db-migrator | 101 | 0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:58.774838 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:58.845094 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:58.893444 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:58.951206 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:59.001141 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:59.084426 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:59.135917 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:59.185963 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1606251146520900u | 1 | 2025-06-16 11:46:59.233153 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1606251146521000u | 1 | 2025-06-16 11:46:59.298947 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1606251146521000u | 1 | 2025-06-16 11:46:59.351414 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1606251146521000u | 1 | 2025-06-16 11:46:59.425094 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1606251146521000u | 1 | 2025-06-16 11:46:59.484338 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1606251146521000u 
| 1 | 2025-06-16 11:46:59.542888 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1606251146521000u | 1 | 2025-06-16 11:46:59.596392 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1606251146521000u | 1 | 2025-06-16 11:46:59.647081 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1606251146521000u | 1 | 2025-06-16 11:46:59.699506 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1606251146521000u | 1 | 2025-06-16 11:46:59.767157 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1606251146521100u | 1 | 2025-06-16 11:46:59.809486 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1606251146521200u | 1 | 2025-06-16 11:46:59.859487 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1606251146521200u | 1 | 2025-06-16 11:46:59.916221 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1606251146521200u | 1 | 2025-06-16 11:46:59.967936 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1606251146521200u | 1 | 2025-06-16 11:47:00.025724 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1606251146521300u | 1 | 2025-06-16 11:47:00.080355 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1606251146521300u | 1 | 2025-06-16 11:47:00.151224 policy-db-migrator | 126 | 0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1606251146521300u | 1 | 2025-06-16 11:47:00.198061 policy-db-migrator | (126 rows) policy-db-migrator | policy-db-migrator | policyadmin: OK @ 1300 policy-db-migrator | Initializing clampacm... policy-db-migrator | 97 blocks policy-db-migrator | Preparing upgrade release version: 1400 policy-db-migrator | Preparing upgrade release version: 1500 policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Preparing upgrade release version: 1601 policy-db-migrator | Preparing upgrade release version: 1700 policy-db-migrator | Preparing upgrade release version: 1701 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | 
policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | clampacm: upgrade available: 0 -> 1701 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1701 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 
0200-automationcompositiondefinition.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-nodetemplatestate.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participantsupportedelements.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-participantreplica.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-participant.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-participant_replica_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | 
policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-message.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-messagejob.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0200-automationcomposition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0300-automationcompositionelement.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0600-nodetemplatestate.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0800-participantreplica.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql policy-db-migrator | UPDATE 0 policy-db-migrator | UPDATE 0 policy-db-migrator | ALTER TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 
policy-db-migrator | clampacm: OK: upgrade (1701) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | ----------+--------- policy-db-migrator | clampacm | 1701 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:00.933219 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:00.991952 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.047219 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.103356 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.186256 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.239655 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.295668 policy-db-migrator | 8 | 
0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.349976 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.417444 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.468193 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.538769 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.582087 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1606251147001400u | 1 | 2025-06-16 11:47:01.676816 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1606251147001500u | 1 | 2025-06-16 11:47:01.726892 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1606251147001500u | 1 | 2025-06-16 11:47:01.770172 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1606251147001500u | 1 | 2025-06-16 11:47:01.830385 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1606251147001500u | 1 | 2025-06-16 11:47:01.905509 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1606251147001500u | 1 | 2025-06-16 11:47:01.958798 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1606251147001500u | 1 | 2025-06-16 11:47:02.01427 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1606251147001500u | 1 | 2025-06-16 11:47:02.062863 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1606251147001500u | 1 | 2025-06-16 11:47:02.110017 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1606251147001600u | 1 | 2025-06-16 11:47:02.164169 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1606251147001600u | 1 | 2025-06-16 11:47:02.213874 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 1606251147001601u | 1 | 2025-06-16 11:47:02.286402 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1606251147001601u | 1 | 2025-06-16 11:47:02.337037 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1606251147001700u | 1 | 2025-06-16 11:47:02.412557 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1606251147001700u | 1 | 2025-06-16 11:47:02.469013 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1606251147001700u | 1 | 2025-06-16 11:47:02.534024 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1606251147001701u | 1 | 2025-06-16 11:47:02.591983 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1606251147001701u | 1 | 2025-06-16 11:47:02.674877 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1606251147001701u | 1 | 2025-06-16 11:47:02.724821 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1606251147001701u | 1 | 2025-06-16 11:47:02.796468 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1606251147001701u | 1 | 2025-06-16 11:47:02.851806 policy-db-migrator | 34 | 
0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1606251147001701u | 1 | 2025-06-16 11:47:02.92591 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1606251147001701u | 1 | 2025-06-16 11:47:02.97929 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1606251147001701u | 1 | 2025-06-16 11:47:03.030606 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1606251147001701u | 1 | 2025-06-16 11:47:03.078821 policy-db-migrator | (37 rows) policy-db-migrator | policy-db-migrator | clampacm: OK @ 1701 policy-db-migrator | Initializing pooling... policy-db-migrator | 4 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | pooling: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-distributed.locking.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | pooling: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | 
| | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | ---------+--------- policy-db-migrator | pooling | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1606251147031600u | 1 | 2025-06-16 11:47:03.758827 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | pooling: OK @ 1600 policy-db-migrator | Initializing operationshistory... policy-db-migrator | 6 blocks policy-db-migrator | Preparing upgrade release version: 1600 policy-db-migrator | Done policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 
policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 0 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- policy-db-migrator | (0 rows) policy-db-migrator | policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | upgrade: 0 -> 1600 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql policy-db-migrator | CREATE TABLE policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | rc=0 policy-db-migrator | policy-db-migrator | > upgrade 0110-operationshistory.sql policy-db-migrator | CREATE TABLE policy-db-migrator | CREATE INDEX policy-db-migrator | CREATE INDEX policy-db-migrator | INSERT 0 1 policy-db-migrator | INSERT 0 1 policy-db-migrator | operationshistory: OK: upgrade (1600) policy-db-migrator | List of databases policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges policy-db-migrator | 
-------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + policy-db-migrator | | | | | | | | | postgres=CTc/postgres policy-db-migrator | (9 rows) policy-db-migrator | policy-db-migrator | CREATE TABLE policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping policy-db-migrator | CREATE TABLE policy-db-migrator | name | version policy-db-migrator | -------------------+--------- policy-db-migrator | operationshistory | 1600 policy-db-migrator | (1 row) policy-db-migrator | policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- policy-db-migrator | 1 | 0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1606251147041600u | 1 | 2025-06-16 11:47:04.395942 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1606251147041600u | 1 | 2025-06-16 11:47:04.454825 policy-db-migrator | (2 rows) policy-db-migrator | policy-db-migrator | operationshistory: OK @ 1600 policy-opa-pdp | Waiting for kafka port 9092... policy-opa-pdp | nc: connect to kafka (172.17.0.5) port 9092 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to kafka (172.17.0.5) port 9092 (tcp) failed: Connection refused policy-opa-pdp | Connection to kafka (172.17.0.5) 9092 port [tcp/*] succeeded! policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | Waiting for pap port 6969... 
policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused policy-opa-pdp | Connection to pap (172.17.0.9) 6969 port [tcp/*] succeeded! 
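[editor's note] The retry loop above comes from the opa-pdp container entrypoint, which polls each dependency (kafka:9092, then pap:6969) with nc until the TCP port accepts a connection. Below is a minimal Go sketch of the same wait-for-port behaviour; the real entrypoint is a shell loop around nc, and only the host/port values are taken from the log.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForPort polls addr until a TCP connection succeeds or the overall timeout expires.
func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("connect to %s failed: %v, retrying...\n", addr, err)
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	// Dependencies the opa-pdp container waits for, per the log above.
	for _, addr := range []string{"kafka:9092", "pap:6969"} {
		if err := waitForPort(addr, 5*time.Minute); err != nil {
			panic(err)
		}
		fmt.Printf("connection to %s succeeded\n", addr)
	}
}
```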
policy-opa-pdp | time="2025-06-16T11:48:07Z" level=debug msg="###################################### " policy-opa-pdp | time="2025-06-16T11:48:07Z" level=debug msg="OPA-PDP: Starting initialisation " policy-opa-pdp | time="2025-06-16T11:48:07Z" level=debug msg="###################################### " policy-opa-pdp | time="2025-06-16T11:48:07Z" level=warning msg="KAFKA_URL not defined, using default value" policy-opa-pdp | time="2025-06-16T11:48:07Z" level=warning msg="PAP_TOPIC not defined, using default value" policy-opa-pdp | time="2025-06-16T11:48:07Z" level=warning msg="PATCH_TOPIC not defined, using default value" policy-opa-pdp | time="2025-06-16T11:48:07Z" level=warning msg="PATCH_GROUPID not defined, using default value" policy-opa-pdp | time="2025-06-16T11:48:07Z" level=warning msg="API_USER not defined, using default value" policy-opa-pdp | time="2025-06-16T11:48:07Z" level=warning msg="API_PASSWORD not defined, using default value" policy-opa-pdp | time="2025-06-16T11:48:07Z" level=warning msg="UseSASLForKAFKA not defined, using default value" policy-opa-pdp | decodedConfig org.apache.kafka.common.security.scram.ScramLoginModule required username="policy-opa-pdp-ku" password="" policy-opa-pdp | time="2025-06-16T11:48:07Z" level=debug msg="Username: " policy-opa-pdp | time="2025-06-16T11:48:07Z" level=debug msg="Password: " policy-opa-pdp | time="2025-06-16T11:48:07Z" level=warning msg="USE_KAFKA_FOR_PATCH not defined, using default value: false" policy-opa-pdp | time="2025-06-16T11:48:07Z" level=debug msg="Configuration module: environment initialised" policy-opa-pdp | DEBU[2025-06-16T11:48:07.2317+00:00] logger initialised Filepath = /var/logs/logs.log, Logsize(MB) = 10, Backups = 3, Loglevel = debug policy-opa-pdp | DEBU[2025-06-16T11:48:07.2319+00:00] Name: opa-7f657737-d4a9-439c-8bcc-1ec79cd614af policy-opa-pdp | DEBU[2025-06-16T11:48:07.2352+00:00] Starting OPA PDP Service policy-opa-pdp | INFO[2025-06-16T11:48:12.2358+00:00] HTTP server started policy-opa-pdp | DEBU[2025-06-16T11:48:12.2368+00:00] Create an instance of OPA Object policy-opa-pdp | DEBU[2025-06-16T11:48:12.2369+00:00] Configure an instance of OPA Object policy-opa-pdp | DEBU[2025-06-16T11:48:12.2380+00:00] Topic start :::: policy-pdp-pap policy-opa-pdp | DEBU[2025-06-16T11:48:12.2380+00:00] Creating Kafka Consumer singleton instance policy-opa-pdp | &map[auto.offset.reset:latest bootstrap.servers:kafka:9092 group.id:opa-pdp]DEBU[2025-06-16T11:48:12.2402+00:00] Topic Subscribed: policy-pdp-pap policy-opa-pdp | DEBU[2025-06-16T11:48:12.2402+00:00] Created SIngleton consumer instance policy-opa-pdp | DEBU[2025-06-16T11:48:12.2516+00:00] Starting PDP Message Listener..... policy-opa-pdp | DEBU[2025-06-16T11:48:22.2521+00:00] New Ticker started with interval 60000 policy-opa-pdp | DEBU[2025-06-16T11:48:32.2602+00:00] After registration successful delay policy-opa-pdp | 2025/06/16 11:49:22 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-16T11:49:22.2531+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"8fb32c29-a3ed-44d5-96e3-0ab34a1fe22a","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750074562252","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-16T11:49:22.2532+00:00] Sending Heartbeat ... 
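[editor's note] The consumer settings printed above (&map[auto.offset.reset:latest bootstrap.servers:kafka:9092 group.id:opa-pdp]) and the subscription to policy-pdp-pap correspond to a standard confluent-kafka-go consumer. The sketch below only reproduces the configuration values visible in the log; it is not the project's actual consumer code.

```go
package main

import (
	"fmt"
	"time"

	"github.com/confluentinc/confluent-kafka-go/v2/kafka"
)

func main() {
	// Values taken from the consumer config map logged by the PDP.
	consumer, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": "kafka:9092",
		"group.id":          "opa-pdp",
		"auto.offset.reset": "latest",
	})
	if err != nil {
		panic(err)
	}
	defer consumer.Close()

	// Topic the PDP subscribes to for PDP_UPDATE / PDP_STATE_CHANGE messages.
	if err := consumer.SubscribeTopics([]string{"policy-pdp-pap"}, nil); err != nil {
		panic(err)
	}

	for {
		msg, err := consumer.ReadMessage(10 * time.Second)
		if err != nil {
			continue // timeouts are expected while the topic is idle
		}
		fmt.Printf("[IN|KAFKA|%s] %s\n", *msg.TopicPartition.Topic, string(msg.Value))
	}
}
```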
policy-opa-pdp | DEBU[2025-06-16T11:49:22.2812+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"8fb32c29-a3ed-44d5-96e3-0ab34a1fe22a","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750074562252","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-16T11:49:22.2813+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-16T11:49:22.2813+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-16T11:49:22.8919+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"22460fd0-d018-424b-9e75-a16791862685","timestampMs":1750074562822,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-16T11:49:22.8928+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-16T11:49:22.8932+00:00] PDP_UPDATE Message received: 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"22460fd0-d018-424b-9e75-a16791862685","timestampMs":1750074562822,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-16T11:49:22.8932+00:00] Policy Is Allowed: slice.capacity.check policy-opa-pdp | DEBU[2025-06-16T11:49:22.8932+00:00] Validating properties data for policy: slice.capacity.check policy-opa-pdp | DEBU[2025-06-16T11:49:22.8934+00:00] Validating properties policy for policy: slice.capacity.check policy-opa-pdp | INFO[2025-06-16T11:49:22.8934+00:00] Validation successful for policy: slice.capacity.check policy-opa-pdp | INFO[2025-06-16T11:49:22.8940+00:00] Directory created: /opt/policies/slice/capacity/check policy-opa-pdp | INFO[2025-06-16T11:49:22.8941+00:00] Policy file saved: /opt/policies/slice/capacity/check/policy.rego policy-opa-pdp | INFO[2025-06-16T11:49:22.8944+00:00] Directory created: /opt/data/node/slice/capacity/check policy-opa-pdp | INFO[2025-06-16T11:49:22.8944+00:00] Data file saved: /opt/data/node/slice/capacity/check/data.json policy-opa-pdp | DEBU[2025-06-16T11:49:22.8944+00:00] Before calling combinedoutput policy-opa-pdp | DEBU[2025-06-16T11:49:22.9134+00:00] Bundle Built Sucessfully.... 
policy-opa-pdp | DEBU[2025-06-16T11:49:22.9162+00:00] storage not found creating : /node policy-opa-pdp | DEBU[2025-06-16T11:49:22.9163+00:00] storage not found creating : /node/slice policy-opa-pdp | DEBU[2025-06-16T11:49:22.9164+00:00] storage not found creating : /node/slice/capacity policy-opa-pdp | DEBU[2025-06-16T11:49:22.9165+00:00] storage not found creating : /node/slice/capacity/check policy-opa-pdp | INFO[2025-06-16T11:49:22.9167+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-16T11:49:22.9168+00:00] Loaded Policy: slice.capacity.check policy-opa-pdp | INFO[2025-06-16T11:49:22.9170+00:00] Processed policies_to_be_deployed successfully policy-opa-pdp | INFO[2025-06-16T11:49:22.9171+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/16 11:49:22 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-16T11:49:22.9174+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"22460fd0-d018-424b-9e75-a16791862685","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"b8ba1bde-364f-4370-a57f-d5179887a823","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074562917","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-16T11:49:22.9175+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-16T11:49:22.9176+00:00] 120000 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9178+00:00] New Ticker started with interval 120000 policy-opa-pdp | DEBU[2025-06-16T11:49:22.9268+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"22460fd0-d018-424b-9e75-a16791862685","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"b8ba1bde-364f-4370-a57f-d5179887a823","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074562917","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-16T11:49:22.9270+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-16T11:49:22.9272+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-16T11:49:22.9571+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8379edd2-a036-4816-ae54-58c6e71b95ed","timestampMs":1750074562822,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-16T11:49:22.9573+00:00] messageType: PDP_STATE_CHANGE policy-opa-pdp | 
DEBU[2025-06-16T11:49:22.9575+00:00] PDP STATE CHANGE message received: {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8379edd2-a036-4816-ae54-58c6e71b95ed","timestampMs":1750074562822,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-16T11:49:22.9576+00:00] State change from PASSIVE To : ACTIVE policy-opa-pdp | INFO[2025-06-16T11:49:22.9577+00:00] Sending PDP Status With State Change response policy-opa-pdp | 2025/06/16 11:49:22 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-16T11:49:22.9580+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"8379edd2-a036-4816-ae54-58c6e71b95ed","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"eebb1bfa-e91d-441d-a67b-4a36e4be4a62","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074562957","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-16T11:49:22.9581+00:00] PDP_STATUS With State Change Message Sent Successfully policy-opa-pdp | DEBU[2025-06-16T11:49:22.9658+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"8379edd2-a036-4816-ae54-58c6e71b95ed","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"eebb1bfa-e91d-441d-a67b-4a36e4be4a62","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074562957","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-16T11:49:22.9659+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-16T11:49:22.9659+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-16T11:49:23.2338+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a031b63c-0de0-4623-977c-96546b52eeee","timestampMs":1750074563220,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-16T11:49:23.2340+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-16T11:49:23.2344+00:00] PDP_UPDATE Message received: {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a031b63c-0de0-4623-977c-96546b52eeee","timestampMs":1750074563220,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | INFO[2025-06-16T11:49:23.2346+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/16 11:49:23 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-16T11:49:23.2349+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp 
Update","response":{"responseTo":"a031b63c-0de0-4623-977c-96546b52eeee","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"e34d4966-145b-4b0a-ad96-da8f7417142f","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074563234","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-16T11:49:23.2350+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-16T11:49:23.2351+00:00] 120000 policy-opa-pdp | DEBU[2025-06-16T11:49:23.2424+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"a031b63c-0de0-4623-977c-96546b52eeee","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"e34d4966-145b-4b0a-ad96-da8f7417142f","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074563234","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-16T11:49:23.2426+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-16T11:49:23.2427+00:00] discarding event of type PDP_STATUS policy-opa-pdp | 2025/06/16 11:50:22 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-16T11:50:22.2533+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"e8df36f5-6aa2-4f66-bdc8-a1add3dbce9d","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074622253","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-16T11:50:22.2534+00:00] Sending Heartbeat ... 
policy-opa-pdp | DEBU[2025-06-16T11:50:22.2626+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"e8df36f5-6aa2-4f66-bdc8-a1add3dbce9d","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074622253","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-16T11:50:22.2627+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-16T11:50:22.2627+00:00] discarding event of type PDP_STATUS policy-opa-pdp | WARN[2025-06-16T11:50:38.9323+00:00] Invalid or Missing Request ID policy-opa-pdp | DEBU[2025-06-16T11:50:38.9324+00:00] Received Health Check message policy-opa-pdp | INFO[2025-06-16T11:50:38.9393+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-16T11:50:38.9394+00:00] datapath to get Data : / policy-opa-pdp | DEBU[2025-06-16T11:50:38.9396+00:00] Json Data at /: {"node":{"slice":{"capacity":{"check":{"threshold":70}}}},"system":{"version":{"build_commit":"","build_hostname":"","build_timestamp":"","version":"1.1.0"}}} policy-opa-pdp | DEBU[2025-06-16T11:50:40.3110+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8e087ef1-2fd0-46b9-9508-582ab8231512","timestampMs":1750074640260,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-16T11:50:40.3113+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-16T11:50:40.3116+00:00] PDP_UPDATE Message received: 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8e087ef1-2fd0-46b9-9508-582ab8231512","timestampMs":1750074640260,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-16T11:50:40.3117+00:00] Check if Policy is Already Deployed: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-16T11:50:40.3117+00:00] Policy is new and should be deployed: zoneB 1.0.6 policy-opa-pdp | DEBU[2025-06-16T11:50:40.3117+00:00] Policy Is Allowed: zoneB policy-opa-pdp | DEBU[2025-06-16T11:50:40.3117+00:00] Validating properties data for policy: zoneB policy-opa-pdp | DEBU[2025-06-16T11:50:40.3117+00:00] Validating properties policy for policy: zoneB policy-opa-pdp | INFO[2025-06-16T11:50:40.3117+00:00] Validation successful for policy: zoneB policy-opa-pdp | INFO[2025-06-16T11:50:40.3119+00:00] Directory created: /opt/policies/zoneB policy-opa-pdp | INFO[2025-06-16T11:50:40.3119+00:00] Policy file saved: /opt/policies/zoneB/policy.rego policy-opa-pdp | INFO[2025-06-16T11:50:40.3119+00:00] Directory created: /opt/data/node/zoneB policy-opa-pdp | INFO[2025-06-16T11:50:40.3120+00:00] Data file saved: /opt/data/node/zoneB/data.json policy-opa-pdp | DEBU[2025-06-16T11:50:40.3120+00:00] Before calling combinedoutput policy-opa-pdp | DEBU[2025-06-16T11:50:40.3376+00:00] Bundle Built Sucessfully.... 
policy-opa-pdp | DEBU[2025-06-16T11:50:40.3442+00:00] storage not found creating : /node/zoneB policy-opa-pdp | INFO[2025-06-16T11:50:40.3444+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.zoneB" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "zoneB" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "zoneB", policy-opa-pdp | "policy-version": "1.0.6" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-16T11:50:40.3444+00:00] Loaded Policy: zoneB policy-opa-pdp | INFO[2025-06-16T11:50:40.3445+00:00] Processed policies_to_be_deployed successfully policy-opa-pdp | INFO[2025-06-16T11:50:40.3445+00:00] Sending PDP Status With Update Response policy-opa-pdp | DEBU[2025-06-16T11:50:40.3446+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8e087ef1-2fd0-46b9-9508-582ab8231512","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"c8d844f0-2569-4217-ba98-fc567023d825","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074640344","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-16T11:50:40.3446+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-16T11:50:40.3446+00:00] 0 policy-opa-pdp | 2025/06/16 11:50:40 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-16T11:50:40.3524+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8e087ef1-2fd0-46b9-9508-582ab8231512","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"c8d844f0-2569-4217-ba98-fc567023d825","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074640344","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-16T11:50:40.3525+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-16T11:50:40.3525+00:00] discarding event of type PDP_STATUS policy-opa-pdp | INFO[2025-06-16T11:51:04.4996+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-16T11:51:04.4997+00:00] datapath to get Data : /node/zoneB/zone policy-opa-pdp | DEBU[2025-06-16T11:51:04.4998+00:00] Json Data at /node/zoneB/zone: 
{"zone_access_logs":[{"access":"granted","log_id":"log1","timestamp":"2024-11-01T09:00:00Z","user":"user1","zone_id":"zoneA"},{"access":"denied","log_id":"log2","timestamp":"2024-11-01T10:30:00Z","user":"user2","zone_id":"zoneA"},{"access":"granted","log_id":"log3","timestamp":"2024-11-01T11:00:00Z","user":"user3","zone_id":"zoneB"}]} policy-opa-pdp | DEBU[2025-06-16T11:51:04.5099+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-16T11:51:04.5100+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-16T11:51:04.5104+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-16T11:51:04.5105+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"aa050755-2cd8-465c-837f-0da821e38d6e","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"b3e0f5ee-aba4-471b-98f7-50d818a1aae4","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":930,"timer_rego_query_compile_ns":151243,"timer_rego_query_eval_ns":542010,"timer_rego_query_parse_ns":104322,"timer_sdk_decision_eval_ns":1019410},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-16T11:51:04Z","timestamp":"2025-06-16T11:51:04.510601634Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-16T11:51:04.5123+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "aa050755-2cd8-465c-837f-0da821e38d6e", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_log_view": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "has_zone_access": [ policy-opa-pdp | { policy-opa-pdp | "access": "granted", policy-opa-pdp | "user": "user1" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-16T11:51:04.5248+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-16T11:51:04.5249+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-16T11:51:04.5252+00:00] Validation successful for request fields policy-opa-pdp | WARN[2025-06-16T11:51:04.5253+00:00] Policy Name zoeB does not exist policy-opa-pdp | DEBU[2025-06-16T11:51:04.5319+00:00] PDP received a decision request. 
policy-opa-pdp | DEBU[2025-06-16T11:51:04.5319+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-16T11:51:04.5322+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-16T11:51:04.5322+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"d2ba74e2-60ce-4d1b-9928-429147f823f3","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"b3e0f5ee-aba4-471b-98f7-50d818a1aae4","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":720,"timer_rego_query_eval_ns":435760,"timer_sdk_decision_eval_ns":547622},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-16T11:51:04Z","timestamp":"2025-06-16T11:51:04.532313602Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-16T11:51:04.5331+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "d2ba74e2-60ce-4d1b-9928-429147f823f3", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_log_view": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "has_zone_access": [ policy-opa-pdp | { policy-opa-pdp | "access": "granted", policy-opa-pdp | "user": "user1" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-16T11:51:04.8246+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"81b29182-51c9-4f5a-a7a1-52cae730ca23","timestampMs":1750074664787,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-16T11:51:04.8247+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-16T11:51:04.8249+00:00] PDP_UPDATE Message received: {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"81b29182-51c9-4f5a-a7a1-52cae730ca23","timestampMs":1750074664787,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | INFO[2025-06-16T11:51:04.8250+00:00] Found Policies to be undeployed policy-opa-pdp | INFO[2025-06-16T11:51:04.8250+00:00] Extracted Policy Name: zoneB, Version: 1.0.6 for undeployment policy-opa-pdp | DEBU[2025-06-16T11:51:04.8251+00:00] Deleting Policy from OPA : /zoneB policy-opa-pdp | DEBU[2025-06-16T11:51:04.8269+00:00] Removing policy directory: /opt/policies/zoneB policy-opa-pdp | DEBU[2025-06-16T11:51:04.8271+00:00] Deleting data from OPA : /node/zoneB policy-opa-pdp | DEBU[2025-06-16T11:51:04.8272+00:00] Analyzing dataPath: /node/zoneB policy-opa-pdp | DEBU[2025-06-16T11:51:04.8273+00:00] Path segments: [ node zoneB] policy-opa-pdp | DEBU[2025-06-16T11:51:04.8273+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/zoneB policy-opa-pdp | DEBU[2025-06-16T11:51:04.8274+00:00] Removing data directory: /opt/data/node/zoneB policy-opa-pdp | 
INFO[2025-06-16T11:51:04.8276+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-16T11:51:04.8276+00:00] Policies Map After Undeployment : { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-16T11:51:04.8277+00:00] Processed policies_to_be_undeployed successfully policy-opa-pdp | INFO[2025-06-16T11:51:04.8278+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/16 11:51:04 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-16T11:51:04.8279+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"81b29182-51c9-4f5a-a7a1-52cae730ca23","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"8e45902c-5cf8-4c4f-947a-2e54b3c310ac","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074664827","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-16T11:51:04.8280+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-16T11:51:04.8280+00:00] 0 policy-opa-pdp | DEBU[2025-06-16T11:51:04.8352+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"81b29182-51c9-4f5a-a7a1-52cae730ca23","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"8e45902c-5cf8-4c4f-947a-2e54b3c310ac","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074664827","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-16T11:51:04.8355+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-16T11:51:04.8355+00:00] discarding event of type PDP_STATUS policy-opa-pdp | DEBU[2025-06-16T11:51:05.9180+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"242d1125-bfd6-47d9-a88c-f3dec38b8930","timestampMs":1750074665898,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-16T11:51:05.9182+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-16T11:51:05.9184+00:00] PDP_UPDATE Message received: {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"242d1125-bfd6-47d9-a88c-f3dec38b8930","timestampMs":1750074665898,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-16T11:51:05.9185+00:00] Check if Policy is Already Deployed: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-16T11:51:05.9186+00:00] Policy is new and should be deployed: vehicle 1.0.6 policy-opa-pdp | DEBU[2025-06-16T11:51:05.9186+00:00] Policy Is Allowed: vehicle policy-opa-pdp | 
DEBU[2025-06-16T11:51:05.9187+00:00] Validating properties data for policy: vehicle policy-opa-pdp | DEBU[2025-06-16T11:51:05.9188+00:00] Validating properties policy for policy: vehicle policy-opa-pdp | INFO[2025-06-16T11:51:05.9188+00:00] Validation successful for policy: vehicle policy-opa-pdp | INFO[2025-06-16T11:51:05.9190+00:00] Directory created: /opt/policies/vehicle policy-opa-pdp | INFO[2025-06-16T11:51:05.9192+00:00] Policy file saved: /opt/policies/vehicle/policy.rego policy-opa-pdp | INFO[2025-06-16T11:51:05.9193+00:00] Directory created: /opt/data/node/vehicle policy-opa-pdp | INFO[2025-06-16T11:51:05.9194+00:00] Data file saved: /opt/data/node/vehicle/data.json policy-opa-pdp | DEBU[2025-06-16T11:51:05.9194+00:00] Before calling combinedoutput policy-opa-pdp | DEBU[2025-06-16T11:51:05.9455+00:00] Bundle Built Sucessfully.... policy-opa-pdp | DEBU[2025-06-16T11:51:05.9511+00:00] storage not found creating : /node/vehicle policy-opa-pdp | INFO[2025-06-16T11:51:05.9512+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.vehicle" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "vehicle" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "vehicle", policy-opa-pdp | "policy-version": "1.0.6" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-16T11:51:05.9513+00:00] Loaded Policy: vehicle policy-opa-pdp | INFO[2025-06-16T11:51:05.9513+00:00] Processed policies_to_be_deployed successfully policy-opa-pdp | INFO[2025-06-16T11:51:05.9513+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/16 11:51:05 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-16T11:51:05.9514+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"242d1125-bfd6-47d9-a88c-f3dec38b8930","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"86ea0fc2-0691-4760-9a5b-22718436e830","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074665951","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-16T11:51:05.9514+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-16T11:51:05.9514+00:00] 0 policy-opa-pdp | DEBU[2025-06-16T11:51:05.9591+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"242d1125-bfd6-47d9-a88c-f3dec38b8930","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": 
\"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"86ea0fc2-0691-4760-9a5b-22718436e830","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074665951","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-16T11:51:05.9591+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-16T11:51:05.9591+00:00] discarding event of type PDP_STATUS policy-opa-pdp | 2025/06/16 11:51:22 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-16T11:51:22.9189+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"37a36887-23a9-4721-97df-1773085f35c1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074682918","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-16T11:51:22.9191+00:00] Sending Heartbeat ... policy-opa-pdp | DEBU[2025-06-16T11:51:22.9268+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"37a36887-23a9-4721-97df-1773085f35c1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074682918","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-16T11:51:22.9278+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-16T11:51:22.9278+00:00] discarding event of type PDP_STATUS policy-opa-pdp | INFO[2025-06-16T11:51:29.9860+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-16T11:51:29.9861+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | DEBU[2025-06-16T11:51:29.9861+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} policy-opa-pdp | INFO[2025-06-16T11:51:29.9973+00:00] PDP received a request to update data through API policy-opa-pdp | DEBU[2025-06-16T11:51:29.9978+00:00] All fields are valid! 
policy-opa-pdp | INFO[2025-06-16T11:51:29.9979+00:00] data : [map[op:add path:/round value:trail]] policy-opa-pdp | INFO[2025-06-16T11:51:29.9979+00:00] policy name : vehicle policy-opa-pdp | DEBU[2025-06-16T11:51:29.9979+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] policy-opa-pdp | DEBU[2025-06-16T11:51:29.9980+00:00] dirParts : [ node vehicle] policy-opa-pdp | INFO[2025-06-16T11:51:29.9983+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} policy-opa-pdp | DEBU[2025-06-16T11:51:29.9984+00:00] root: /node/vehicle policy-opa-pdp | DEBU[2025-06-16T11:51:29.9984+00:00] path : round policy-opa-pdp | INFO[2025-06-16T11:51:29.9984+00:00] calling ParsePatchPathEscaped to check the path policy-opa-pdp | DEBU[2025-06-16T11:51:29.9984+00:00] No path conflicts detected policy-opa-pdp | INFO[2025-06-16T11:51:29.9985+00:00] Updated the data in the corresponding path successfully policy-opa-pdp | INFO[2025-06-16T11:51:30.0058+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-16T11:51:30.0059+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | DEBU[2025-06-16T11:51:30.0060+00:00] Json Data at /node/vehicle: {"round":"trail","vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} policy-opa-pdp | INFO[2025-06-16T11:51:30.0153+00:00] PDP received a request to update data through API policy-opa-pdp | DEBU[2025-06-16T11:51:30.0157+00:00] All fields are valid! policy-opa-pdp | INFO[2025-06-16T11:51:30.0158+00:00] data : [map[op:replace path:/round value:%!s(float64=578)]] policy-opa-pdp | INFO[2025-06-16T11:51:30.0158+00:00] policy name : vehicle policy-opa-pdp | DEBU[2025-06-16T11:51:30.0160+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] policy-opa-pdp | DEBU[2025-06-16T11:51:30.0160+00:00] dirParts : [ node vehicle] policy-opa-pdp | INFO[2025-06-16T11:51:30.0161+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} policy-opa-pdp | DEBU[2025-06-16T11:51:30.0162+00:00] root: /node/vehicle policy-opa-pdp | DEBU[2025-06-16T11:51:30.0162+00:00] path : round policy-opa-pdp | INFO[2025-06-16T11:51:30.0163+00:00] calling ParsePatchPathEscaped to check the path policy-opa-pdp | DEBU[2025-06-16T11:51:30.0165+00:00] No path conflicts detected policy-opa-pdp | INFO[2025-06-16T11:51:30.0166+00:00] Updated the data in the corresponding path successfully policy-opa-pdp | INFO[2025-06-16T11:51:30.0231+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-16T11:51:30.0231+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | DEBU[2025-06-16T11:51:30.0233+00:00] Json Data at /node/vehicle: {"round":578,"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} policy-opa-pdp | INFO[2025-06-16T11:51:30.0328+00:00] PDP received a request to update data through API policy-opa-pdp | DEBU[2025-06-16T11:51:30.0333+00:00] All fields are valid! 
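The entries above show the PDP's dynamic-data API applying JSON Patch operations against the data loaded under /node/vehicle for the vehicle policy: an "add" of /round with value "trail", then a "replace" with 578, with the "remove" following next in the log. A minimal client-side sketch of building and sending those patches is below; the base URL, endpoint path, request field names, and credentials are assumptions for illustration, since the log records only the server-side handling, not the originating HTTP requests.

import requests

# All endpoint/credential values below are assumptions; only the server-side
# processing of these patches appears in the CSIT log.
BASE = "http://opa-pdp.example:8282"                    # hypothetical host/port
DATA_URL = BASE + "/policy/pdpo/v1/data/node/vehicle"   # hypothetical path
AUTH = ("user", "password")                             # hypothetical credentials

# JSON Patch bodies matching the operations the PDP logged for /node/vehicle.
# The "data"/"policy-name" field names are inferred from the logged fields
# ("data : [...]", "policy name : vehicle") and are not confirmed by the log.
add_round     = {"data": [{"op": "add",     "path": "/round", "value": "trail"}], "policy-name": "vehicle"}
replace_round = {"data": [{"op": "replace", "path": "/round", "value": 578}],     "policy-name": "vehicle"}
remove_round  = {"data": [{"op": "remove",  "path": "/round"}],                   "policy-name": "vehicle"}

for body in (add_round, replace_round, remove_round):
    resp = requests.patch(DATA_URL, json=body, auth=AUTH, timeout=10)
    resp.raise_for_status()

# Reading the data back should reflect each patch, as seen in the
# "Json Data at /node/vehicle" debug entries in the log.
print(requests.get(DATA_URL, auth=AUTH, timeout=10).json())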
policy-opa-pdp | INFO[2025-06-16T11:51:30.0333+00:00] data : [map[op:remove path:/round]] policy-opa-pdp | INFO[2025-06-16T11:51:30.0333+00:00] policy name : vehicle policy-opa-pdp | DEBU[2025-06-16T11:51:30.0335+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] policy-opa-pdp | DEBU[2025-06-16T11:51:30.0335+00:00] dirParts : [ node vehicle] policy-opa-pdp | INFO[2025-06-16T11:51:30.0337+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} policy-opa-pdp | DEBU[2025-06-16T11:51:30.0338+00:00] root: /node/vehicle policy-opa-pdp | DEBU[2025-06-16T11:51:30.0339+00:00] path : round policy-opa-pdp | INFO[2025-06-16T11:51:30.0340+00:00] calling ParsePatchPathEscaped to check the path policy-opa-pdp | DEBU[2025-06-16T11:51:30.0341+00:00] No path conflicts detected policy-opa-pdp | INFO[2025-06-16T11:51:30.0343+00:00] Updated the data in the corresponding path successfully policy-opa-pdp | INFO[2025-06-16T11:51:30.0406+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-16T11:51:30.0407+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | DEBU[2025-06-16T11:51:30.0408+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} policy-opa-pdp | DEBU[2025-06-16T11:51:30.0498+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-16T11:51:30.0498+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-16T11:51:30.0502+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-16T11:51:30.0503+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"73c2af41-a1a3-424f-8114-190e065ca726","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"b3e0f5ee-aba4-471b-98f7-50d818a1aae4","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":770,"timer_rego_query_compile_ns":138783,"timer_rego_query_eval_ns":462099,"timer_rego_query_parse_ns":116452,"timer_sdk_decision_eval_ns":1011009},"msg":"Decision Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-16T11:51:30Z","timestamp":"2025-06-16T11:51:30.050541644Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-16T11:51:30.0519+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "73c2af41-a1a3-424f-8114-190e065ca726", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_granted": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "user_has_vehicle_access": [ policy-opa-pdp | { policy-opa-pdp | "status": "available", policy-opa-pdp | "type": "car" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-16T11:51:30.0590+00:00] PDP received a decision request. 
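The decision log entry above records an evaluation of the vehicle policy with input {"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"}, returning allow=true. A sketch of a client issuing such a decision request follows; the decision endpoint, request field names, and credentials are assumptions, since the log shows only the OPA-side evaluation and the raw decision output.

import requests

# Hypothetical endpoint and credentials; only the evaluated input and the
# resulting decision appear in the log above.
DECISION_URL = "http://opa-pdp.example:8282/policy/pdpo/v1/decision"  # hypothetical
AUTH = ("user", "password")                                           # hypothetical

request_body = {
    # The "policyName" field name is an assumption; the log only shows that
    # the path "vehicle" was evaluated.
    "policyName": "vehicle",
    "input": {  # mirrors the logged decision input verbatim
        "actions": ["use"],
        "attributes": ["type", "status"],
        "user": "user1",
        "vehicle_id": "v1",
    },
}

resp = requests.post(DECISION_URL, json=request_body, auth=AUTH, timeout=10)
# The logged result for this input was:
#   {"action_is_granted": true, "allow": true,
#    "user_has_vehicle_access": [{"status": "available", "type": "car"}]}
print(resp.json())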
policy-opa-pdp | DEBU[2025-06-16T11:51:30.0591+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-16T11:51:30.0594+00:00] Validation successful for request fields policy-opa-pdp | WARN[2025-06-16T11:51:30.0596+00:00] Policy Name vehile does not exist policy-opa-pdp | DEBU[2025-06-16T11:51:30.0679+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-16T11:51:30.0680+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-16T11:51:30.0684+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-16T11:51:30.0685+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"f5b941d3-e5fb-49e6-861f-1c93b68ee8a5","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"b3e0f5ee-aba4-471b-98f7-50d818a1aae4","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":990,"timer_rego_query_eval_ns":438378,"timer_sdk_decision_eval_ns":617431},"msg":"Decision Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-16T11:51:30Z","timestamp":"2025-06-16T11:51:30.068750787Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-16T11:51:30.0695+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "f5b941d3-e5fb-49e6-861f-1c93b68ee8a5", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_granted": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "user_has_vehicle_access": [ policy-opa-pdp | { policy-opa-pdp | "status": "available", policy-opa-pdp | "type": "car" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-16T11:51:30.3073+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"ac6fa7ae-3295-4484-b921-15eb49f2a5f5","timestampMs":1750074690284,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-16T11:51:30.3074+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-16T11:51:30.3079+00:00] PDP_UPDATE Message received: {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"ac6fa7ae-3295-4484-b921-15eb49f2a5f5","timestampMs":1750074690284,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | INFO[2025-06-16T11:51:30.3079+00:00] Found Policies to be undeployed policy-opa-pdp | INFO[2025-06-16T11:51:30.3079+00:00] Extracted Policy Name: vehicle, Version: 1.0.6 for undeployment policy-opa-pdp | DEBU[2025-06-16T11:51:30.3080+00:00] Deleting Policy from OPA : /vehicle policy-opa-pdp | DEBU[2025-06-16T11:51:30.3105+00:00] Removing policy directory: /opt/policies/vehicle policy-opa-pdp | DEBU[2025-06-16T11:51:30.3108+00:00] Deleting data from OPA : /node/vehicle policy-opa-pdp | DEBU[2025-06-16T11:51:30.3108+00:00] Analyzing dataPath: /node/vehicle policy-opa-pdp | DEBU[2025-06-16T11:51:30.3108+00:00] 
Path segments: [ node vehicle] policy-opa-pdp | DEBU[2025-06-16T11:51:30.3108+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/vehicle policy-opa-pdp | DEBU[2025-06-16T11:51:30.3109+00:00] Removing data directory: /opt/data/node/vehicle policy-opa-pdp | INFO[2025-06-16T11:51:30.3111+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-16T11:51:30.3111+00:00] Policies Map After Undeployment : { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-16T11:51:30.3113+00:00] Processed policies_to_be_undeployed successfully policy-opa-pdp | INFO[2025-06-16T11:51:30.3114+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/16 11:51:30 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-16T11:51:30.3115+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ac6fa7ae-3295-4484-b921-15eb49f2a5f5","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"30e7d665-f3dd-4b60-8b08-574fb121d718","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074690311","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-16T11:51:30.3115+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-16T11:51:30.3116+00:00] 0 policy-opa-pdp | DEBU[2025-06-16T11:51:30.3191+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ac6fa7ae-3295-4484-b921-15eb49f2a5f5","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"30e7d665-f3dd-4b60-8b08-574fb121d718","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074690311","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-16T11:51:30.3192+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-16T11:51:30.3192+00:00] discarding event of type PDP_STATUS policy-opa-pdp | INFO[2025-06-16T11:51:30.6900+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-16T11:51:30.6901+00:00] datapath to get Data : /node/vehicle policy-opa-pdp | WARN[2025-06-16T11:51:30.6901+00:00] Error in reading data under /node/vehicle path policy-opa-pdp | ERRO[2025-06-16T11:51:30.6903+00:00] Error in getting 
data - storage_not_found_error: /node/vehicle: document does not exist policy-opa-pdp | INFO[2025-06-16T11:51:30.7003+00:00] PDP received a request to update data through API policy-opa-pdp | DEBU[2025-06-16T11:51:30.7005+00:00] All fields are valid! policy-opa-pdp | INFO[2025-06-16T11:51:30.7006+00:00] data : [map[op:remove path:/round]] policy-opa-pdp | INFO[2025-06-16T11:51:30.7006+00:00] policy name : vehicle policy-opa-pdp | DEBU[2025-06-16T11:51:30.7006+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0]] policy-opa-pdp | ERRO[2025-06-16T11:51:30.7007+00:00] Policy associated with the patch request does not exists policy-opa-pdp | DEBU[2025-06-16T11:51:31.3605+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSI
sCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"db176b33-7fa1-414d-893a-c54fbbea91ea","timestampMs":1750074691344,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-16T11:51:31.3608+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-16T11:51:31.3610+00:00] PDP_UPDATE Message received: 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"db176b33-7fa1-414d-893a-c54fbbea91ea","timestampMs":1750074691344,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-16T11:51:31.3610+00:00] Check if Policy is Already Deployed: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-16T11:51:31.3611+00:00] Policy is new and should be deployed: abac 1.0.7 policy-opa-pdp | DEBU[2025-06-16T11:51:31.3612+00:00] Policy Is Allowed: abac policy-opa-pdp | DEBU[2025-06-16T11:51:31.3612+00:00] Validating properties data for policy: abac policy-opa-pdp | DEBU[2025-06-16T11:51:31.3612+00:00] Validating properties policy for policy: abac policy-opa-pdp | INFO[2025-06-16T11:51:31.3612+00:00] Validation successful for policy: abac policy-opa-pdp | INFO[2025-06-16T11:51:31.3615+00:00] Directory created: /opt/policies/abac policy-opa-pdp | INFO[2025-06-16T11:51:31.3616+00:00] Policy file saved: /opt/policies/abac/policy.rego policy-opa-pdp | INFO[2025-06-16T11:51:31.3617+00:00] Directory created: /opt/data/node/abac policy-opa-pdp | INFO[2025-06-16T11:51:31.3618+00:00] Data file saved: /opt/data/node/abac/data.json policy-opa-pdp | DEBU[2025-06-16T11:51:31.3619+00:00] Before calling combinedoutput policy-opa-pdp | DEBU[2025-06-16T11:51:31.3779+00:00] Bundle Built Sucessfully.... 
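The PDP_UPDATE that triggered this deployment carries the abac policy and its data as base64 strings under properties.policy and properties.data; decoding the policy string yields a Rego module ("package abac" with an allow rule over data.node.abac.sensor_data), and decoding the data string yields the sensor_data JSON the PDP writes to /opt/data/node/abac/data.json. A short sketch of inspecting such a message is below; the variable holding the raw Kafka payload is a stand-in for however the PDP_UPDATE JSON is obtained.

import base64
import json

# `pdp_update_json` stands in for the raw PDP_UPDATE payload consumed from the
# policy-pdp-pap topic (as echoed in the log above); obtaining it is out of
# scope for this sketch.
def inspect_pdp_update(pdp_update_json: str) -> None:
    msg = json.loads(pdp_update_json)
    for policy in msg.get("policiesToBeDeployed", []):
        props = policy.get("properties", {})
        print("policy:", policy.get("name"), policy.get("version"))
        # Each entry is base64-encoded; decoding reveals the Rego source and
        # the JSON data document bundled with the policy.
        for key, b64 in props.get("policy", {}).items():
            print("-- rego module:", key, "--")
            print(base64.b64decode(b64).decode("utf-8"))
        for key, b64 in props.get("data", {}).items():
            print("-- data document:", key, "--")
            print(json.dumps(json.loads(base64.b64decode(b64)), indent=2))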
policy-opa-pdp | DEBU[2025-06-16T11:51:31.3835+00:00] storage not found creating : /node/abac policy-opa-pdp | INFO[2025-06-16T11:51:31.3838+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.abac" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "abac" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "abac", policy-opa-pdp | "policy-version": "1.0.7" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-16T11:51:31.3838+00:00] Loaded Policy: abac policy-opa-pdp | INFO[2025-06-16T11:51:31.3839+00:00] Processed policies_to_be_deployed successfully policy-opa-pdp | INFO[2025-06-16T11:51:31.3840+00:00] Sending PDP Status With Update Response policy-opa-pdp | 2025/06/16 11:51:31 KafkaProducer or producer produce message policy-opa-pdp | DEBU[2025-06-16T11:51:31.3841+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"db176b33-7fa1-414d-893a-c54fbbea91ea","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"eb5489f9-6131-48a8-b898-103060841e49","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074691384","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-16T11:51:31.3842+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-16T11:51:31.3843+00:00] 0 policy-opa-pdp | DEBU[2025-06-16T11:51:31.3912+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"db176b33-7fa1-414d-893a-c54fbbea91ea","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"eb5489f9-6131-48a8-b898-103060841e49","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074691384","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-16T11:51:31.3913+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-16T11:51:31.3913+00:00] discarding event of type PDP_STATUS policy-opa-pdp | INFO[2025-06-16T11:51:55.4302+00:00] PDP received a request to get data through API policy-opa-pdp | DEBU[2025-06-16T11:51:55.4303+00:00] datapath to get Data : /node/abac policy-opa-pdp | DEBU[2025-06-16T11:51:55.4305+00:00] Json Data at /node/abac: {"sensor_data":[{"humidity":"40%","id":"0001","location":"Sri Lanka","particle_density":"1.3 g/l","precipitation":"1000 mm","temperature":"28 C","timestamp":"2024-02-26","windspeed":"5.5 m/s"},{"humidity":"45%","id":"0002","location":"Colombo","particle_density":"1.5 g/l","precipitation":"1200 mm","temperature":"30 
C","timestamp":"2024-02-26","windspeed":"6.0 m/s"},{"humidity":"60%","id":"0003","location":"Kandy","particle_density":"1.1 g/l","precipitation":"800 mm","temperature":"25 C","timestamp":"2024-02-26","windspeed":"4.5 m/s"},{"humidity":"30%","id":"0004","location":"Galle","particle_density":"1.8 g/l","precipitation":"500 mm","temperature":"35 C","timestamp":"2024-02-27","windspeed":"7.2 m/s"},{"humidity":"20%","id":"0005","location":"Jaffna","particle_density":"0.9 g/l","precipitation":"300 mm","temperature":"-5 C","timestamp":"2024-02-27","windspeed":"3.8 m/s"},{"humidity":"55%","id":"0006","location":"Trincomalee","particle_density":"1.2 g/l","precipitation":"1000 mm","temperature":"20 C","timestamp":"2024-02-28","windspeed":"5.0 m/s"},{"humidity":"50%","id":"0007","location":"Nuwara Eliya","particle_density":"1.3 g/l","precipitation":"600 mm","temperature":"25 C","timestamp":"2024-02-28","windspeed":"4.0 m/s"},{"humidity":"40%","id":"0008","location":"Anuradhapura","particle_density":"1.4 g/l","precipitation":"700 mm","temperature":"28 C","timestamp":"2024-02-29","windspeed":"5.8 m/s"},{"humidity":"65%","id":"0009","location":"Matara","particle_density":"1.6 g/l","precipitation":"900 mm","temperature":"32 C","timestamp":"2024-02-29","windspeed":"6.5 m/s"}]} policy-opa-pdp | DEBU[2025-06-16T11:51:55.4406+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-16T11:51:55.4407+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-16T11:51:55.4410+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-16T11:51:55.4411+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"0c9b3ab6-7c7e-43b1-9ad5-3430e944419f","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"b3e0f5ee-aba4-471b-98f7-50d818a1aae4","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":1130,"timer_rego_query_compile_ns":220034,"timer_rego_query_eval_ns":1332106,"timer_rego_query_parse_ns":126133,"timer_sdk_decision_eval_ns":1909667},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-16T11:51:55Z","timestamp":"2025-06-16T11:51:55.441190716Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-16T11:51:55.4439+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "0c9b3ab6-7c7e-43b1-9ad5-3430e944419f", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_read": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "viewable_sensor_data": [ policy-opa-pdp | { policy-opa-pdp | "location": "Galle", policy-opa-pdp | "precipitation": "500 mm", policy-opa-pdp | "temperature": "35 C", policy-opa-pdp | "windspeed": "7.2 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Jaffna", policy-opa-pdp | "precipitation": "300 mm", policy-opa-pdp | "temperature": "-5 C", policy-opa-pdp | "windspeed": "3.8 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Nuwara Eliya", policy-opa-pdp | 
"precipitation": "600 mm", policy-opa-pdp | "temperature": "25 C", policy-opa-pdp | "windspeed": "4.0 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Trincomalee", policy-opa-pdp | "precipitation": "1000 mm", policy-opa-pdp | "temperature": "20 C", policy-opa-pdp | "windspeed": "5.0 m/s" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-16T11:51:55.4515+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-16T11:51:55.4517+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-16T11:51:55.4521+00:00] Validation successful for request fields policy-opa-pdp | WARN[2025-06-16T11:51:55.4523+00:00] Policy Name abc does not exist policy-opa-pdp | DEBU[2025-06-16T11:51:55.4598+00:00] PDP received a decision request. policy-opa-pdp | DEBU[2025-06-16T11:51:55.4598+00:00] Headers processed for requestId: Unknown policy-opa-pdp | DEBU[2025-06-16T11:51:55.4601+00:00] Validation successful for request fields policy-opa-pdp | DEBU[2025-06-16T11:51:55.4602+00:00] SDK making a decision policy-opa-pdp | {"decision_id":"130ce0d0-f0d7-43f1-8214-b9f4193258db","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"b3e0f5ee-aba4-471b-98f7-50d818a1aae4","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":900,"timer_rego_query_eval_ns":884346,"timer_sdk_decision_eval_ns":997329},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-16T11:51:55Z","timestamp":"2025-06-16T11:51:55.460292991Z","type":"openpolicyagent.org/decision_logs"} policy-opa-pdp | DEBU[2025-06-16T11:51:55.4616+00:00] RAW opa Decision output: policy-opa-pdp | { policy-opa-pdp | "ID": "130ce0d0-f0d7-43f1-8214-b9f4193258db", policy-opa-pdp | "Result": { policy-opa-pdp | "action_is_read": true, policy-opa-pdp | "allow": true, policy-opa-pdp | "viewable_sensor_data": [ policy-opa-pdp | { policy-opa-pdp | "location": "Galle", policy-opa-pdp | "precipitation": "500 mm", policy-opa-pdp | "temperature": "35 C", policy-opa-pdp | "windspeed": "7.2 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Jaffna", policy-opa-pdp | "precipitation": "300 mm", policy-opa-pdp | "temperature": "-5 C", policy-opa-pdp | "windspeed": "3.8 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Nuwara Eliya", policy-opa-pdp | "precipitation": "600 mm", policy-opa-pdp | "temperature": "25 C", policy-opa-pdp | "windspeed": "4.0 m/s" policy-opa-pdp | }, policy-opa-pdp | { policy-opa-pdp | "location": "Trincomalee", policy-opa-pdp | "precipitation": "1000 mm", policy-opa-pdp | "temperature": "20 C", policy-opa-pdp | "windspeed": "5.0 m/s" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | }, policy-opa-pdp | "Provenance": { 
policy-opa-pdp | "version": "1.1.0", policy-opa-pdp | "build_commit": "", policy-opa-pdp | "build_timestamp": "", policy-opa-pdp | "build_hostname": "" policy-opa-pdp | } policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-16T11:51:55.9965+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"6125c77a-eecc-44d2-a582-c2c1c7662698","timestampMs":1750074715978,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | DEBU[2025-06-16T11:51:55.9966+00:00] messageType: PDP_UPDATE policy-opa-pdp | DEBU[2025-06-16T11:51:55.9968+00:00] PDP_UPDATE Message received: {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"6125c77a-eecc-44d2-a582-c2c1c7662698","timestampMs":1750074715978,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-opa-pdp | INFO[2025-06-16T11:51:55.9968+00:00] Found Policies to be undeployed policy-opa-pdp | INFO[2025-06-16T11:51:55.9968+00:00] Extracted Policy Name: abac, Version: 1.0.7 for undeployment policy-opa-pdp | DEBU[2025-06-16T11:51:55.9969+00:00] Deleting Policy from OPA : /abac policy-opa-pdp | DEBU[2025-06-16T11:51:55.9993+00:00] Removing policy directory: /opt/policies/abac policy-opa-pdp | DEBU[2025-06-16T11:51:55.9998+00:00] Deleting data from OPA : /node/abac policy-opa-pdp | DEBU[2025-06-16T11:51:55.9998+00:00] Analyzing dataPath: /node/abac policy-opa-pdp | DEBU[2025-06-16T11:51:55.9998+00:00] Path segments: [ node abac] policy-opa-pdp | DEBU[2025-06-16T11:51:55.9998+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/abac policy-opa-pdp | DEBU[2025-06-16T11:51:56.0000+00:00] Removing data directory: /opt/data/node/abac policy-opa-pdp | INFO[2025-06-16T11:51:56.0002+00:00] PoliciesDeployed Map: { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | DEBU[2025-06-16T11:51:56.0002+00:00] Policies Map After Undeployment : { policy-opa-pdp | "deployed_policies_dict": [ policy-opa-pdp | { policy-opa-pdp | "data": [ policy-opa-pdp | "node.slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy": [ policy-opa-pdp | "slice.capacity.check" policy-opa-pdp | ], policy-opa-pdp | "policy-id": "slice.capacity.check", policy-opa-pdp | "policy-version": "1.0.0" policy-opa-pdp | } policy-opa-pdp | ] policy-opa-pdp | } policy-opa-pdp | INFO[2025-06-16T11:51:56.0003+00:00] Processed policies_to_be_undeployed successfully policy-opa-pdp | 2025/06/16 11:51:56 KafkaProducer or producer produce message policy-opa-pdp | INFO[2025-06-16T11:51:56.0004+00:00] Sending PDP Status With Update Response policy-opa-pdp | DEBU[2025-06-16T11:51:56.0005+00:00] [OUT|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp 
Update","response":{"responseTo":"6125c77a-eecc-44d2-a582-c2c1c7662698","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"2ecd18ed-59c0-4575-93c0-71d12bee4f3c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074716000","deploymentInstanceInfo":""} policy-opa-pdp | INFO[2025-06-16T11:51:56.0007+00:00] PDP_STATUS Message Sent Successfully policy-opa-pdp | DEBU[2025-06-16T11:51:56.0007+00:00] 0 policy-opa-pdp | DEBU[2025-06-16T11:51:56.0088+00:00] [IN|KAFKA|policy-pdp-pap] policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"6125c77a-eecc-44d2-a582-c2c1c7662698","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"2ecd18ed-59c0-4575-93c0-71d12bee4f3c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074716000","deploymentInstanceInfo":""} policy-opa-pdp | DEBU[2025-06-16T11:51:56.0089+00:00] messageType: PDP_STATUS policy-opa-pdp | DEBU[2025-06-16T11:51:56.0089+00:00] discarding event of type PDP_STATUS policy-pap | Waiting for api port 6969... policy-pap | api (172.17.0.7:6969) open policy-pap | Waiting for kafka port 9092... policy-pap | kafka (172.17.0.5:9092) open policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-pap | policy-pap | . ____ _ __ _ _ policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / policy-pap | =========|_|==============|___/=/_/_/_/ policy-pap | policy-pap | :: Spring Boot :: (v3.4.6) policy-pap | policy-pap | [2025-06-16T11:47:18.165+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 60 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-pap | [2025-06-16T11:47:18.167+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default" policy-pap | [2025-06-16T11:47:19.518+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-pap | [2025-06-16T11:47:19.602+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 73 ms. Found 7 JPA repository interfaces. 
policy-pap | [2025-06-16T11:47:20.517+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) policy-pap | [2025-06-16T11:47:20.530+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-16T11:47:20.532+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2025-06-16T11:47:20.532+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] policy-pap | [2025-06-16T11:47:20.575+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2025-06-16T11:47:20.575+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2353 ms policy-pap | [2025-06-16T11:47:21.020+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2025-06-16T11:47:21.095+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final policy-pap | [2025-06-16T11:47:21.151+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-pap | [2025-06-16T11:47:21.537+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-pap | [2025-06-16T11:47:21.578+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2025-06-16T11:47:21.795+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@c96c497 policy-pap | [2025-06-16T11:47:21.797+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-pap | [2025-06-16T11:47:21.880+00:00|INFO|pooling|main] HHH10001005: Database info: policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] policy-pap | Database driver: undefined/unknown policy-pap | Database version: 16.4 policy-pap | Autocommit mode: undefined/unknown policy-pap | Isolation level: undefined/unknown policy-pap | Minimum pool size: undefined/unknown policy-pap | Maximum pool size: undefined/unknown policy-pap | [2025-06-16T11:47:23.741+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-pap | [2025-06-16T11:47:23.745+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2025-06-16T11:47:24.974+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 3e2c39b7-eef4-42b5-bb62-dddcc04b4db7 policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | 
internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | 
ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-16T11:47:25.026+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-16T11:47:25.164+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-16T11:47:25.164+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-16T11:47:25.164+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750074445163 policy-pap | [2025-06-16T11:47:25.166+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-1, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-16T11:47:25.167+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 
policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-16T11:47:25.167+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-16T11:47:25.175+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-16T11:47:25.175+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-16T11:47:25.175+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750074445175 policy-pap | [2025-06-16T11:47:25.175+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-16T11:47:25.489+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=opaGroup, description=null, pdpGroupState=ACTIVE, properties={}, pdpSubgroups=[PdpSubGroup(pdpType=opa, supportedPolicyTypes=[onap.policies.native.opa 1.0.0], policies=[slice.capacity.check 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties={}, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2025-06-16T11:47:25.616+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2025-06-16T11:47:25.692+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager policy-pap | [2025-06-16T11:47:25.914+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. policy-pap | [2025-06-16T11:47:26.761+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' policy-pap | [2025-06-16T11:47:26.872+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2025-06-16T11:47:26.892+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' policy-pap | [2025-06-16T11:47:26.913+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2025-06-16T11:47:26.913+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2025-06-16T11:47:26.914+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2025-06-16T11:47:26.914+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2025-06-16T11:47:26.914+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2025-06-16T11:47:26.915+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2025-06-16T11:47:26.915+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2025-06-16T11:47:26.916+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@494e502c policy-pap | [2025-06-16T11:47:26.927+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-16T11:47:26.927+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true 
policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 3e2c39b7-eef4-42b5-bb62-dddcc04b4db7 policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 
policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-16T11:47:26.927+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-16T11:47:26.934+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-16T11:47:26.934+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-16T11:47:26.934+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750074446934 policy-pap | [2025-06-16T11:47:26.935+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-16T11:47:26.935+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-pap | [2025-06-16T11:47:26.935+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=c23900ec-fda7-4b47-a08c-365f5571c5be, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@65450878 policy-pap | [2025-06-16T11:47:26.935+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=c23900ec-fda7-4b47-a08c-365f5571c5be, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-16T11:47:26.935+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | 
client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-4 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | enable.metrics.push = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | group.protocol = classic policy-pap | group.remote.assignor = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | 
ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2025-06-16T11:47:26.936+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-16T11:47:26.941+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-16T11:47:26.941+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-16T11:47:26.941+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750074446941 policy-pap | [2025-06-16T11:47:26.941+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-pap | [2025-06-16T11:47:26.941+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-pap | [2025-06-16T11:47:26.941+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=c23900ec-fda7-4b47-a08c-365f5571c5be, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-16T11:47:26.942+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2025-06-16T11:47:26.942+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e2695bc6-c57e-4b98-b4cd-fa67d17e9724, alive=false, publisher=null]]: starting policy-pap | [2025-06-16T11:47:26.953+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] 
policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm 
= SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-16T11:47:26.954+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-16T11:47:26.966+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-pap | [2025-06-16T11:47:26.982+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-16T11:47:26.982+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-16T11:47:26.982+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750074446982 policy-pap | [2025-06-16T11:47:26.982+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e2695bc6-c57e-4b98-b4cd-fa67d17e9724, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-16T11:47:26.982+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4264fa19-4f07-49fd-b544-73b85dbe7390, alive=false, publisher=null]]: starting policy-pap | [2025-06-16T11:47:26.983+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.gzip.level = -1 policy-pap | compression.lz4.level = 9 policy-pap | compression.type = none policy-pap | compression.zstd.level = 3 policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | enable.metrics.push = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metadata.recovery.strategy = none policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.max.ms = 1000 policy-pap | 
retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.header.urlencode = false policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2025-06-16T11:47:26.983+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector policy-pap | [2025-06-16T11:47:26.984+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
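[editor's note] The two ProducerConfig dumps above (producer-1 and producer-2) show PAP creating idempotent producers with String serializers against kafka:9092 (acks = -1, enable.idempotence = true). The following is a minimal, hypothetical Java sketch of a producer built with those same key settings and publishing to the policy-pdp-pap topic; it is illustrative only (class name, payload, and the direct KafkaProducer usage are assumptions, not PAP's actual publisher wrapper), with all other settings left at client defaults.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public final class PdpUpdatePublisherSketch {
        public static void main(String[] args) {
            // Mirror the key ProducerConfig values logged above; everything else stays at defaults.
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.ACKS_CONFIG, "all");                      // acks = -1 in the dump
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");       // "Instantiated an idempotent producer"
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Hypothetical payload; PAP actually serializes PdpUpdate/PdpStateChange objects to JSON strings.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
                producer.flush();
            }
        }
    }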
policy-pap | [2025-06-16T11:47:26.988+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 policy-pap | [2025-06-16T11:47:26.989+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 policy-pap | [2025-06-16T11:47:26.989+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1750074446988 policy-pap | [2025-06-16T11:47:26.989+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4264fa19-4f07-49fd-b544-73b85dbe7390, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2025-06-16T11:47:26.989+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2025-06-16T11:47:26.989+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2025-06-16T11:47:26.991+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2025-06-16T11:47:26.991+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2025-06-16T11:47:26.992+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2025-06-16T11:47:26.992+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2025-06-16T11:47:26.992+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2025-06-16T11:47:26.992+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2025-06-16T11:47:26.993+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-pap | [2025-06-16T11:47:26.993+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2025-06-16T11:47:26.995+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2025-06-16T11:47:26.995+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.561 seconds (process running for 10.113) policy-pap | [2025-06-16T11:47:27.409+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: Y_BS0uSaQHW9oN2tPXU35A policy-pap | [2025-06-16T11:47:27.409+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2025-06-16T11:47:27.409+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: Y_BS0uSaQHW9oN2tPXU35A policy-pap | [2025-06-16T11:47:27.411+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: Y_BS0uSaQHW9oN2tPXU35A policy-pap | [2025-06-16T11:47:27.440+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 policy-pap | [2025-06-16T11:47:27.441+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-16T11:47:27.441+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Cluster ID: Y_BS0uSaQHW9oN2tPXU35A policy-pap | 
[2025-06-16T11:47:27.441+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 policy-pap | [2025-06-16T11:47:27.559+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-16T11:47:27.576+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-16T11:47:28.274+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-16T11:47:28.284+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] (Re-)joining group policy-pap | [2025-06-16T11:47:28.313+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Request joining group due to: need to re-join with the given member-id: consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3-2a22e032-03cd-4275-8d5a-b5e00b723573 policy-pap | [2025-06-16T11:47:28.314+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] (Re-)joining group policy-pap | [2025-06-16T11:47:29.052+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2025-06-16T11:47:29.055+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-16T11:47:29.061+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-9d2a71e4-8b7d-42af-bb40-a70da9daaae1 policy-pap | [2025-06-16T11:47:29.061+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2025-06-16T11:47:31.341+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Successfully joined group with generation Generation{generationId=1, memberId='consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3-2a22e032-03cd-4275-8d5a-b5e00b723573', protocol='range'} policy-pap | [2025-06-16T11:47:31.354+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Finished assignment for group at generation 1: 
{consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3-2a22e032-03cd-4275-8d5a-b5e00b723573=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-16T11:47:31.403+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Successfully synced group in generation Generation{generationId=1, memberId='consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3-2a22e032-03cd-4275-8d5a-b5e00b723573', protocol='range'} policy-pap | [2025-06-16T11:47:31.404+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-16T11:47:31.409+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-16T11:47:31.429+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-16T11:47:31.452+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-3e2c39b7-eef4-42b5-bb62-dddcc04b4db7-3, groupId=3e2c39b7-eef4-42b5-bb62-dddcc04b4db7] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2025-06-16T11:47:32.067+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-9d2a71e4-8b7d-42af-bb40-a70da9daaae1', protocol='range'} policy-pap | [2025-06-16T11:47:32.068+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-9d2a71e4-8b7d-42af-bb40-a70da9daaae1=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2025-06-16T11:47:32.074+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-9d2a71e4-8b7d-42af-bb40-a70da9daaae1', protocol='range'} policy-pap | [2025-06-16T11:47:32.075+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2025-06-16T11:47:32.075+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2025-06-16T11:47:32.077+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2025-06-16T11:47:32.079+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, 
groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2025-06-16T11:47:41.609+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2025-06-16T11:47:41.609+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-pap | [2025-06-16T11:47:41.611+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms policy-pap | [2025-06-16T11:49:22.294+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: policy-pap | [] policy-pap | [2025-06-16T11:49:22.295+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"8fb32c29-a3ed-44d5-96e3-0ab34a1fe22a","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750074562252","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:49:22.295+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"8fb32c29-a3ed-44d5-96e3-0ab34a1fe22a","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1750074562252","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:49:22.302+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-16T11:49:22.843+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting policy-pap | [2025-06-16T11:49:22.843+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting listener policy-pap | [2025-06-16T11:49:22.843+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting timer policy-pap | [2025-06-16T11:49:22.844+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=22460fd0-d018-424b-9e75-a16791862685, expireMs=1750074592844] policy-pap | [2025-06-16T11:49:22.845+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting enqueue policy-pap | [2025-06-16T11:49:22.845+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=22460fd0-d018-424b-9e75-a16791862685, expireMs=1750074592844] policy-pap | [2025-06-16T11:49:22.845+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate started policy-pap | [2025-06-16T11:49:22.848+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"22460fd0-d018-424b-9e75-a16791862685","timestampMs":1750074562822,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:49:22.895+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"22460fd0-d018-424b-9e75-a16791862685","timestampMs":1750074562822,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 
policy-pap | [2025-06-16T11:49:22.896+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T11:49:22.896+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"22460fd0-d018-424b-9e75-a16791862685","timestampMs":1750074562822,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:49:22.897+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T11:49:22.928+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"22460fd0-d018-424b-9e75-a16791862685","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"b8ba1bde-364f-4370-a57f-d5179887a823","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074562917","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:49:22.929+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"22460fd0-d018-424b-9e75-a16791862685","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": 
\"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"b8ba1bde-364f-4370-a57f-d5179887a823","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074562917","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:49:22.930+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 22460fd0-d018-424b-9e75-a16791862685 policy-pap | [2025-06-16T11:49:22.930+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping policy-pap | [2025-06-16T11:49:22.930+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping enqueue policy-pap | [2025-06-16T11:49:22.931+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping timer policy-pap | [2025-06-16T11:49:22.931+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=22460fd0-d018-424b-9e75-a16791862685, expireMs=1750074592844] policy-pap | [2025-06-16T11:49:22.931+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping listener policy-pap | [2025-06-16T11:49:22.931+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopped policy-pap | [2025-06-16T11:49:22.944+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate successful policy-pap | [2025-06-16T11:49:22.945+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af start publishing next request policy-pap | [2025-06-16T11:49:22.945+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange starting policy-pap | [2025-06-16T11:49:22.945+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange starting listener policy-pap | [2025-06-16T11:49:22.945+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"slice.capacity.check","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-16T11:49:22.945+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange starting timer policy-pap | [2025-06-16T11:49:22.947+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=8379edd2-a036-4816-ae54-58c6e71b95ed, expireMs=1750074592947] policy-pap | [2025-06-16T11:49:22.947+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange starting enqueue policy-pap | [2025-06-16T11:49:22.947+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange started policy-pap | [2025-06-16T11:49:22.947+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=8379edd2-a036-4816-ae54-58c6e71b95ed, expireMs=1750074592947] policy-pap | [2025-06-16T11:49:22.948+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8379edd2-a036-4816-ae54-58c6e71b95ed","timestampMs":1750074562822,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:49:22.960+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8379edd2-a036-4816-ae54-58c6e71b95ed","timestampMs":1750074562822,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:49:22.960+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-16T11:49:22.968+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"8379edd2-a036-4816-ae54-58c6e71b95ed","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"eebb1bfa-e91d-441d-a67b-4a36e4be4a62","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074562957","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:49:22.969+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 8379edd2-a036-4816-ae54-58c6e71b95ed policy-pap | [2025-06-16T11:49:22.973+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-notification=LEADER_NOT_AVAILABLE} policy-pap | [2025-06-16T11:49:23.226+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8379edd2-a036-4816-ae54-58c6e71b95ed","timestampMs":1750074562822,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:49:23.226+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE policy-pap | [2025-06-16T11:49:23.228+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"8379edd2-a036-4816-ae54-58c6e71b95ed","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"eebb1bfa-e91d-441d-a67b-4a36e4be4a62","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074562957","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:49:23.228+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange stopping policy-pap | [2025-06-16T11:49:23.228+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange stopping enqueue policy-pap | [2025-06-16T11:49:23.228+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af 
PdpStateChange stopping timer policy-pap | [2025-06-16T11:49:23.228+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=8379edd2-a036-4816-ae54-58c6e71b95ed, expireMs=1750074592947] policy-pap | [2025-06-16T11:49:23.228+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange stopping listener policy-pap | [2025-06-16T11:49:23.228+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange stopped policy-pap | [2025-06-16T11:49:23.229+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpStateChange successful policy-pap | [2025-06-16T11:49:23.229+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af start publishing next request policy-pap | [2025-06-16T11:49:23.229+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting policy-pap | [2025-06-16T11:49:23.229+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting listener policy-pap | [2025-06-16T11:49:23.229+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting timer policy-pap | [2025-06-16T11:49:23.229+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=a031b63c-0de0-4623-977c-96546b52eeee, expireMs=1750074593229] policy-pap | [2025-06-16T11:49:23.229+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting enqueue policy-pap | [2025-06-16T11:49:23.229+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate started policy-pap | [2025-06-16T11:49:23.229+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a031b63c-0de0-4623-977c-96546b52eeee","timestampMs":1750074563220,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:49:23.237+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a031b63c-0de0-4623-977c-96546b52eeee","timestampMs":1750074563220,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:49:23.237+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a031b63c-0de0-4623-977c-96546b52eeee","timestampMs":1750074563220,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:49:23.237+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T11:49:23.237+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | 
[2025-06-16T11:49:23.245+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"a031b63c-0de0-4623-977c-96546b52eeee","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"e34d4966-145b-4b0a-ad96-da8f7417142f","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074563234","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:49:23.245+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping policy-pap | [2025-06-16T11:49:23.245+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping enqueue policy-pap | [2025-06-16T11:49:23.245+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping timer policy-pap | [2025-06-16T11:49:23.245+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=a031b63c-0de0-4623-977c-96546b52eeee, expireMs=1750074593229] policy-pap | [2025-06-16T11:49:23.245+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping listener policy-pap | [2025-06-16T11:49:23.246+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopped policy-pap | [2025-06-16T11:49:23.246+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"a031b63c-0de0-4623-977c-96546b52eeee","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"e34d4966-145b-4b0a-ad96-da8f7417142f","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074563234","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:49:23.247+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id a031b63c-0de0-4623-977c-96546b52eeee policy-pap | [2025-06-16T11:49:23.251+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate successful policy-pap | [2025-06-16T11:49:23.251+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af has no more requests policy-pap | [2025-06-16T11:49:26.994+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms policy-pap | [2025-06-16T11:49:52.844+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=22460fd0-d018-424b-9e75-a16791862685, expireMs=1750074592844] policy-pap | [2025-06-16T11:49:52.947+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=8379edd2-a036-4816-ae54-58c6e71b95ed, expireMs=1750074592947] policy-pap | [2025-06-16T11:50:22.266+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp 
heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"e8df36f5-6aa2-4f66-bdc8-a1add3dbce9d","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074622253","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:50:22.272+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"e8df36f5-6aa2-4f66-bdc8-a1add3dbce9d","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074622253","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:50:22.278+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-16T11:50:40.257+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group opaGroup policy-pap | [2025-06-16T11:50:40.258+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-6] add policy zoneB 1.0.6 to subgroup opaGroup opa count=2 policy-pap | [2025-06-16T11:50:40.259+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering a deploy for policy zoneB 1.0.6 policy-pap | [2025-06-16T11:50:40.260+00:00|INFO|SessionData|http-nio-6969-exec-6] add update opa-7f657737-d4a9-439c-8bcc-1ec79cd614af opaGroup opa policies=1 policy-pap | [2025-06-16T11:50:40.261+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group opaGroup policy-pap | [2025-06-16T11:50:40.261+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group opaGroup policy-pap | [2025-06-16T11:50:40.276+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=DEPLOYMENT, timestamp=2025-06-16T11:50:40Z, user=policyadmin)] policy-pap | [2025-06-16T11:50:40.304+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting policy-pap | [2025-06-16T11:50:40.304+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting listener policy-pap | [2025-06-16T11:50:40.304+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting timer policy-pap | [2025-06-16T11:50:40.304+00:00|INFO|TimerManager|http-nio-6969-exec-6] update timer registered Timer [name=8e087ef1-2fd0-46b9-9508-582ab8231512, expireMs=1750074670304] policy-pap | [2025-06-16T11:50:40.304+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting enqueue policy-pap | [2025-06-16T11:50:40.304+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate started policy-pap | [2025-06-16T11:50:40.305+00:00|INFO|TimerManager|Thread-9] update timer waiting 30000ms Timer [name=8e087ef1-2fd0-46b9-9508-582ab8231512, expireMs=1750074670304] policy-pap | [2025-06-16T11:50:40.305+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8e087ef1-2fd0-46b9-9508-582ab8231512","timestampMs":1750074640260,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:50:40.312+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8e087ef1-2fd0-46b9-9508-582ab8231512","timestampMs":1750074640260,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:50:40.312+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T11:50:40.314+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8e087ef1-2fd0-46b9-9508-582ab8231512","timestampMs":1750074640260,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:50:40.314+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T11:50:40.355+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8e087ef1-2fd0-46b9-9508-582ab8231512","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"c8d844f0-2569-4217-ba98-fc567023d825","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074640344","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:50:40.356+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping policy-pap | [2025-06-16T11:50:40.356+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping enqueue policy-pap | [2025-06-16T11:50:40.356+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping timer policy-pap | [2025-06-16T11:50:40.356+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=8e087ef1-2fd0-46b9-9508-582ab8231512, expireMs=1750074670304] policy-pap | [2025-06-16T11:50:40.356+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping listener policy-pap | [2025-06-16T11:50:40.356+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopped policy-pap | [2025-06-16T11:50:40.358+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"8e087ef1-2fd0-46b9-9508-582ab8231512","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"c8d844f0-2569-4217-ba98-fc567023d825","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074640344","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:50:40.358+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 8e087ef1-2fd0-46b9-9508-582ab8231512 policy-pap | [2025-06-16T11:50:40.367+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate successful policy-pap | [2025-06-16T11:50:40.367+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af has no more requests policy-pap | [2025-06-16T11:50:40.368+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-16T11:51:04.786+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group opaGroup policy-pap | [2025-06-16T11:51:04.787+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-9] remove policy zoneB 1.0.6 from subgroup opaGroup opa count=1 policy-pap | [2025-06-16T11:51:04.787+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering an undeploy for policy zoneB 1.0.6 policy-pap | [2025-06-16T11:51:04.787+00:00|INFO|SessionData|http-nio-6969-exec-9] add update opa-7f657737-d4a9-439c-8bcc-1ec79cd614af opaGroup opa policies=0 policy-pap | [2025-06-16T11:51:04.787+00:00|INFO|SessionData|http-nio-6969-exec-9] update cached group opaGroup policy-pap | [2025-06-16T11:51:04.787+00:00|INFO|SessionData|http-nio-6969-exec-9] updating DB group opaGroup policy-pap | [2025-06-16T11:51:04.798+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-16T11:51:04Z, user=policyadmin)] policy-pap | [2025-06-16T11:51:04.809+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting policy-pap | [2025-06-16T11:51:04.809+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting listener policy-pap | [2025-06-16T11:51:04.809+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting timer policy-pap | [2025-06-16T11:51:04.809+00:00|INFO|TimerManager|http-nio-6969-exec-9] update timer registered Timer [name=81b29182-51c9-4f5a-a7a1-52cae730ca23, expireMs=1750074694809] policy-pap | [2025-06-16T11:51:04.809+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting enqueue policy-pap | [2025-06-16T11:51:04.809+00:00|INFO|ServiceManager|http-nio-6969-exec-9] 
opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate started policy-pap | [2025-06-16T11:51:04.809+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"81b29182-51c9-4f5a-a7a1-52cae730ca23","timestampMs":1750074664787,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:51:04.826+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"81b29182-51c9-4f5a-a7a1-52cae730ca23","timestampMs":1750074664787,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:51:04.827+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T11:51:04.834+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"81b29182-51c9-4f5a-a7a1-52cae730ca23","timestampMs":1750074664787,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:51:04.835+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T11:51:04.838+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"81b29182-51c9-4f5a-a7a1-52cae730ca23","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"8e45902c-5cf8-4c4f-947a-2e54b3c310ac","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074664827","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:51:04.838+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 81b29182-51c9-4f5a-a7a1-52cae730ca23 policy-pap | [2025-06-16T11:51:04.839+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"81b29182-51c9-4f5a-a7a1-52cae730ca23","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"8e45902c-5cf8-4c4f-947a-2e54b3c310ac","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074664827","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:51:04.839+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping policy-pap | [2025-06-16T11:51:04.839+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping enqueue policy-pap | [2025-06-16T11:51:04.839+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping timer policy-pap | [2025-06-16T11:51:04.839+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=81b29182-51c9-4f5a-a7a1-52cae730ca23, expireMs=1750074694809] policy-pap | [2025-06-16T11:51:04.839+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping listener policy-pap | [2025-06-16T11:51:04.839+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopped policy-pap | [2025-06-16T11:51:04.868+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate successful policy-pap | [2025-06-16T11:51:04.868+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af has no more requests policy-pap | [2025-06-16T11:51:04.868+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]} policy-pap | [2025-06-16T11:51:05.196+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group opaGroup policy-pap | [2025-06-16T11:51:05.198+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-10] failed to undeploy policy: zoneB null policy-pap | [2025-06-16T11:51:05.199+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-10] undeploy policy failed policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: zoneB null policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) policy-pap | at 
org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) policy-pap | at 
org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at 
org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) policy-pap | [2025-06-16T11:51:05.897+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group opaGroup policy-pap | 
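Each successful deploy or undeploy above ends with a message on the policy-notification topic carrying per-policy success, failure and incomplete counts. A hedged helper a test client might use to check such a notification; the field names are copied from the JSON in this log, but the function itself is hypothetical:

def notification_ok(note: dict) -> bool:
    # True when every policy in the notification completed with no failures.
    entries = note.get("deployed-policies", []) + note.get("undeployed-policies", [])
    return bool(entries) and all(
        e["success-count"] >= 1 and e["failure-count"] == 0 and e["incomplete-count"] == 0
        for e in entries
    )

# Against the zoneB undeploy notification shown above this returns True:
# notification_ok({"deployed-policies": [],
#                  "undeployed-policies": [{"policy-id": "zoneB", "policy-version": "1.0.6",
#                                           "success-count": 1, "failure-count": 0,
#                                           "incomplete-count": 0}]})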
[2025-06-16T11:51:05.897+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-1] add policy vehicle 1.0.6 to subgroup opaGroup opa count=2 policy-pap | [2025-06-16T11:51:05.897+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy vehicle 1.0.6 policy-pap | [2025-06-16T11:51:05.898+00:00|INFO|SessionData|http-nio-6969-exec-1] add update opa-7f657737-d4a9-439c-8bcc-1ec79cd614af opaGroup opa policies=1 policy-pap | [2025-06-16T11:51:05.898+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group opaGroup policy-pap | [2025-06-16T11:51:05.898+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group opaGroup policy-pap | [2025-06-16T11:51:05.907+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=DEPLOYMENT, timestamp=2025-06-16T11:51:05Z, user=policyadmin)] policy-pap | [2025-06-16T11:51:05.914+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting policy-pap | [2025-06-16T11:51:05.914+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting listener policy-pap | [2025-06-16T11:51:05.914+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting timer policy-pap | [2025-06-16T11:51:05.914+00:00|INFO|TimerManager|http-nio-6969-exec-1] update timer registered Timer [name=242d1125-bfd6-47d9-a88c-f3dec38b8930, expireMs=1750074695914] policy-pap | [2025-06-16T11:51:05.914+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting enqueue policy-pap | [2025-06-16T11:51:05.914+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate started policy-pap | [2025-06-16T11:51:05.914+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"242d1125-bfd6-47d9-a88c-f3dec38b8930","timestampMs":1750074665898,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:51:05.922+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"242d1125-bfd6-47d9-a88c-f3dec38b8930","timestampMs":1750074665898,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:51:05.922+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T11:51:05.923+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"242d1125-bfd6-47d9-a88c-f3dec38b8930","timestampMs":1750074665898,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:51:05.923+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T11:51:05.963+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"242d1125-bfd6-47d9-a88c-f3dec38b8930","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": 
\"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"86ea0fc2-0691-4760-9a5b-22718436e830","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074665951","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:51:05.964+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping policy-pap | [2025-06-16T11:51:05.964+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping enqueue policy-pap | [2025-06-16T11:51:05.964+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping timer policy-pap | [2025-06-16T11:51:05.964+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=242d1125-bfd6-47d9-a88c-f3dec38b8930, expireMs=1750074695914] policy-pap | [2025-06-16T11:51:05.964+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping listener policy-pap | [2025-06-16T11:51:05.964+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopped policy-pap | [2025-06-16T11:51:05.967+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"242d1125-bfd6-47d9-a88c-f3dec38b8930","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"86ea0fc2-0691-4760-9a5b-22718436e830","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074665951","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:51:05.968+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 242d1125-bfd6-47d9-a88c-f3dec38b8930 policy-pap | [2025-06-16T11:51:05.973+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate successful policy-pap | [2025-06-16T11:51:05.974+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af has no more requests policy-pap | [2025-06-16T11:51:05.974+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-16T11:51:10.305+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=8e087ef1-2fd0-46b9-9508-582ab8231512, expireMs=1750074670304] policy-pap | [2025-06-16T11:51:22.931+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp 
heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"37a36887-23a9-4721-97df-1773085f35c1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074682918","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:51:22.931+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"37a36887-23a9-4721-97df-1773085f35c1","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074682918","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:51:22.932+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2025-06-16T11:51:27.004+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms policy-pap | [2025-06-16T11:51:30.283+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group opaGroup policy-pap | [2025-06-16T11:51:30.283+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-2] remove policy vehicle 1.0.6 from subgroup opaGroup opa count=1 policy-pap | [2025-06-16T11:51:30.283+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] Registering an undeploy for policy vehicle 1.0.6 policy-pap | [2025-06-16T11:51:30.284+00:00|INFO|SessionData|http-nio-6969-exec-2] add update opa-7f657737-d4a9-439c-8bcc-1ec79cd614af opaGroup opa policies=0 policy-pap | [2025-06-16T11:51:30.284+00:00|INFO|SessionData|http-nio-6969-exec-2] update cached group opaGroup policy-pap | [2025-06-16T11:51:30.284+00:00|INFO|SessionData|http-nio-6969-exec-2] updating DB group opaGroup policy-pap | [2025-06-16T11:51:30.291+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-16T11:51:30Z, user=policyadmin)] policy-pap | [2025-06-16T11:51:30.300+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting policy-pap | [2025-06-16T11:51:30.301+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting listener policy-pap | [2025-06-16T11:51:30.301+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting timer policy-pap | [2025-06-16T11:51:30.301+00:00|INFO|TimerManager|http-nio-6969-exec-2] update timer registered Timer [name=ac6fa7ae-3295-4484-b921-15eb49f2a5f5, expireMs=1750074720301] policy-pap | [2025-06-16T11:51:30.302+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting enqueue policy-pap | [2025-06-16T11:51:30.302+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=ac6fa7ae-3295-4484-b921-15eb49f2a5f5, expireMs=1750074720301] policy-pap | [2025-06-16T11:51:30.302+00:00|INFO|ServiceManager|http-nio-6969-exec-2] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate started policy-pap | [2025-06-16T11:51:30.303+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"ac6fa7ae-3295-4484-b921-15eb49f2a5f5","timestampMs":1750074690284,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:51:30.310+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"ac6fa7ae-3295-4484-b921-15eb49f2a5f5","timestampMs":1750074690284,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:51:30.310+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"ac6fa7ae-3295-4484-b921-15eb49f2a5f5","timestampMs":1750074690284,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:51:30.310+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T11:51:30.310+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T11:51:30.321+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ac6fa7ae-3295-4484-b921-15eb49f2a5f5","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"30e7d665-f3dd-4b60-8b08-574fb121d718","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074690311","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:51:30.321+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id ac6fa7ae-3295-4484-b921-15eb49f2a5f5 policy-pap | [2025-06-16T11:51:30.336+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ac6fa7ae-3295-4484-b921-15eb49f2a5f5","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"30e7d665-f3dd-4b60-8b08-574fb121d718","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074690311","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:51:30.336+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping policy-pap | [2025-06-16T11:51:30.336+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping enqueue policy-pap | [2025-06-16T11:51:30.336+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping timer policy-pap | [2025-06-16T11:51:30.336+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=ac6fa7ae-3295-4484-b921-15eb49f2a5f5, expireMs=1750074720301] policy-pap | [2025-06-16T11:51:30.336+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping listener policy-pap | [2025-06-16T11:51:30.336+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopped policy-pap | [2025-06-16T11:51:30.343+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate successful policy-pap | [2025-06-16T11:51:30.343+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af has no more requests policy-pap | [2025-06-16T11:51:30.343+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]} policy-pap | [2025-06-16T11:51:30.681+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group opaGroup policy-pap | [2025-06-16T11:51:30.682+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-3] failed to undeploy policy: vehicle null policy-pap | [2025-06-16T11:51:30.682+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-3] undeploy policy failed policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: vehicle null policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() policy-pap | at 
org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) policy-pap | at 
jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 
policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) policy-pap | at 
org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at 
org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) policy-pap | [2025-06-16T11:51:31.344+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group opaGroup policy-pap | [2025-06-16T11:51:31.344+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-4] add policy abac 1.0.7 to subgroup opaGroup 
opa count=2 policy-pap | [2025-06-16T11:51:31.344+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering a deploy for policy abac 1.0.7 policy-pap | [2025-06-16T11:51:31.344+00:00|INFO|SessionData|http-nio-6969-exec-4] add update opa-7f657737-d4a9-439c-8bcc-1ec79cd614af opaGroup opa policies=1 policy-pap | [2025-06-16T11:51:31.344+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group opaGroup policy-pap | [2025-06-16T11:51:31.344+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group opaGroup policy-pap | [2025-06-16T11:51:31.350+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=DEPLOYMENT, timestamp=2025-06-16T11:51:31Z, user=policyadmin)] policy-pap | [2025-06-16T11:51:31.356+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting policy-pap | [2025-06-16T11:51:31.356+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting listener policy-pap | [2025-06-16T11:51:31.356+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting timer policy-pap | [2025-06-16T11:51:31.356+00:00|INFO|TimerManager|http-nio-6969-exec-4] update timer registered Timer [name=db176b33-7fa1-414d-893a-c54fbbea91ea, expireMs=1750074721356] policy-pap | [2025-06-16T11:51:31.356+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting enqueue policy-pap | [2025-06-16T11:51:31.356+00:00|INFO|ServiceManager|http-nio-6969-exec-4] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate started policy-pap | [2025-06-16T11:51:31.357+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"db176b33-7fa1-414d-893a-c54fbbea91ea","timestampMs":1750074691344,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:51:31.364+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMj
ciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"db176b33-7fa1-414d-893a-c54fbbea91ea","timestampMs":1750074691344,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:51:31.364+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"db176b33-7fa1-414d-893a-c54fbbea91ea","timestampMs":1750074691344,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:51:31.364+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T11:51:31.364+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T11:51:31.394+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"db176b33-7fa1-414d-893a-c54fbbea91ea","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"eb5489f9-6131-48a8-b898-103060841e49","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074691384","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:51:31.394+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id db176b33-7fa1-414d-893a-c54fbbea91ea policy-pap | [2025-06-16T11:51:31.395+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"db176b33-7fa1-414d-893a-c54fbbea91ea","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"eb5489f9-6131-48a8-b898-103060841e49","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074691384","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:51:31.396+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping policy-pap | [2025-06-16T11:51:31.396+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping enqueue policy-pap | [2025-06-16T11:51:31.396+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping timer policy-pap | [2025-06-16T11:51:31.396+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=db176b33-7fa1-414d-893a-c54fbbea91ea, expireMs=1750074721356] policy-pap | [2025-06-16T11:51:31.396+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping listener policy-pap | [2025-06-16T11:51:31.396+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopped policy-pap | [2025-06-16T11:51:31.404+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate successful policy-pap | [2025-06-16T11:51:31.404+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af has no more requests policy-pap | [2025-06-16T11:51:31.404+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} policy-pap | [2025-06-16T11:51:55.977+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group opaGroup policy-pap | [2025-06-16T11:51:55.977+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy abac 1.0.7 from subgroup opaGroup opa count=1 policy-pap | [2025-06-16T11:51:55.977+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy abac 1.0.7 policy-pap | [2025-06-16T11:51:55.978+00:00|INFO|SessionData|http-nio-6969-exec-6] add update opa-7f657737-d4a9-439c-8bcc-1ec79cd614af opaGroup opa policies=0 policy-pap | [2025-06-16T11:51:55.978+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group opaGroup policy-pap | [2025-06-16T11:51:55.978+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group opaGroup policy-pap | [2025-06-16T11:51:55.984+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=UNDEPLOYMENT, timestamp=2025-06-16T11:51:55Z, user=policyadmin)] policy-pap | [2025-06-16T11:51:55.991+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting policy-pap | [2025-06-16T11:51:55.992+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting listener policy-pap | [2025-06-16T11:51:55.992+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting timer policy-pap | [2025-06-16T11:51:55.992+00:00|INFO|TimerManager|http-nio-6969-exec-6] update timer registered Timer [name=6125c77a-eecc-44d2-a582-c2c1c7662698, expireMs=1750074745992] policy-pap | [2025-06-16T11:51:55.992+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate starting enqueue policy-pap | [2025-06-16T11:51:55.992+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate started policy-pap | [2025-06-16T11:51:55.992+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"6125c77a-eecc-44d2-a582-c2c1c7662698","timestampMs":1750074715978,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:51:55.997+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"6125c77a-eecc-44d2-a582-c2c1c7662698","timestampMs":1750074715978,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:51:55.997+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T11:51:56.002+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-67eb747c-ae94-4608-a9dc-a7ac49c8c84e","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"6125c77a-eecc-44d2-a582-c2c1c7662698","timestampMs":1750074715978,"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","pdpGroup":"opaGroup","pdpSubgroup":"opa"} policy-pap | [2025-06-16T11:51:56.002+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2025-06-16T11:51:56.010+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"6125c77a-eecc-44d2-a582-c2c1c7662698","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"2ecd18ed-59c0-4575-93c0-71d12bee4f3c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074716000","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:51:56.011+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 6125c77a-eecc-44d2-a582-c2c1c7662698 policy-pap | [2025-06-16T11:51:56.012+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"6125c77a-eecc-44d2-a582-c2c1c7662698","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-7f657737-d4a9-439c-8bcc-1ec79cd614af","requestId":"2ecd18ed-59c0-4575-93c0-71d12bee4f3c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1750074716000","deploymentInstanceInfo":""} policy-pap | [2025-06-16T11:51:56.012+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping policy-pap | [2025-06-16T11:51:56.012+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping enqueue policy-pap | [2025-06-16T11:51:56.012+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] 
opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping timer policy-pap | [2025-06-16T11:51:56.012+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=6125c77a-eecc-44d2-a582-c2c1c7662698, expireMs=1750074745992] policy-pap | [2025-06-16T11:51:56.012+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopping listener policy-pap | [2025-06-16T11:51:56.012+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate stopped policy-pap | [2025-06-16T11:51:56.020+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af PdpUpdate successful policy-pap | [2025-06-16T11:51:56.020+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-7f657737-d4a9-439c-8bcc-1ec79cd614af has no more requests policy-pap | [2025-06-16T11:51:56.021+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}]} policy-pap | [2025-06-16T11:51:56.298+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group opaGroup policy-pap | [2025-06-16T11:51:56.298+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-8] failed to undeploy policy: abac null policy-pap | [2025-06-16T11:51:56.298+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-8] undeploy policy failed policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: abac null policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() policy-pap | at 
org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) policy-pap | at 
jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 
policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) policy-pap | at 
org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at 
org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) policy-pap | [2025-06-16T11:52:00.301+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=ac6fa7ae-3295-4484-b921-15eb49f2a5f5, expireMs=1750074720301] postgres | The files belonging to this database system will 
be owned by user "postgres". postgres | This user must also own the server process. postgres | postgres | The database cluster will be initialized with locale "en_US.utf8". postgres | The default database encoding has accordingly been set to "UTF8". postgres | The default text search configuration will be set to "english". postgres | postgres | Data page checksums are disabled. postgres | postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok postgres | creating subdirectories ... ok postgres | selecting dynamic shared memory implementation ... posix postgres | selecting default max_connections ... 100 postgres | selecting default shared_buffers ... 128MB postgres | selecting default time zone ... Etc/UTC postgres | creating configuration files ... ok postgres | running bootstrap script ... ok postgres | performing post-bootstrap initialization ... ok postgres | syncing data to disk ... ok postgres | postgres | postgres | Success. You can now start the database server using: postgres | postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start postgres | postgres | initdb: warning: enabling "trust" authentication for local connections postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. postgres | waiting for server to start....2025-06-16 11:46:49.496 UTC [49] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-16 11:46:49.497 UTC [49] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-16 11:46:49.502 UTC [52] LOG: database system was shut down at 2025-06-16 11:46:49 UTC postgres | 2025-06-16 11:46:49.505 UTC [49] LOG: database system is ready to accept connections postgres | done postgres | server started postgres | postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf postgres | postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh postgres | #!/bin/bash -xv postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved postgres | # postgres | # Licensed under the Apache License, Version 2.0 (the "License"); postgres | # you may not use this file except in compliance with the License. postgres | # You may obtain a copy of the License at postgres | # postgres | # http://www.apache.org/licenses/LICENSE-2.0 postgres | # postgres | # Unless required by applicable law or agreed to in writing, software postgres | # distributed under the License is distributed on an "AS IS" BASIS, postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. postgres | # See the License for the specific language governing permissions and postgres | # limitations under the License. 
postgres | postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' postgres | CREATE ROLE postgres | postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | do postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" postgres | done postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' postgres | GRANT postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' postgres | GRANT postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE clampacm;' postgres | CREATE DATABASE postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' postgres | ALTER DATABASE postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' postgres | GRANT postgres | postgres | waiting for server to shut down....2025-06-16 11:46:50.978 UTC [49] LOG: 
received fast shutdown request postgres | 2025-06-16 11:46:50.981 UTC [49] LOG: aborting any active transactions postgres | 2025-06-16 11:46:50.985 UTC [49] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1 postgres | 2025-06-16 11:46:50.985 UTC [50] LOG: shutting down postgres | 2025-06-16 11:46:50.987 UTC [50] LOG: checkpoint starting: shutdown immediate postgres | 2025-06-16 11:46:51.413 UTC [50] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.335 s, sync=0.085 s, total=0.428 s; sync files=1788, longest=0.010 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 postgres | 2025-06-16 11:46:51.424 UTC [49] LOG: database system is shut down postgres | done postgres | server stopped postgres | postgres | PostgreSQL init process complete; ready for start up. postgres | postgres | 2025-06-16 11:46:51.504 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit postgres | 2025-06-16 11:46:51.505 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 postgres | 2025-06-16 11:46:51.505 UTC [1] LOG: listening on IPv6 address "::", port 5432 postgres | 2025-06-16 11:46:51.507 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres | 2025-06-16 11:46:51.514 UTC [102] LOG: database system was shut down at 2025-06-16 11:46:51 UTC postgres | 2025-06-16 11:46:51.520 UTC [1] LOG: database system is ready to accept connections postgres | 2025-06-16 11:51:51.582 UTC [100] LOG: checkpoint starting: time postgres | 2025-06-16 11:52:56.484 UTC [100] LOG: checkpoint complete: wrote 650 buffers (4.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=64.874 s, sync=0.021 s, total=64.902 s; sync files=515, longest=0.002 s, average=0.001 s; distance=3534 kB, estimate=3534 kB; lsn=0/31502E0, redo lsn=0/314DDE0 prometheus | time=2025-06-16T11:46:49.652Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d prometheus | time=2025-06-16T11:46:49.652Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" prometheus | time=2025-06-16T11:46:49.652Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" prometheus | time=2025-06-16T11:46:49.653Z level=INFO source=main.go:806 msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs prometheus | time=2025-06-16T11:46:49.655Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 prometheus | time=2025-06-16T11:46:49.657Z level=INFO source=main.go:1266 msg="Starting TSDB ..." prometheus | time=2025-06-16T11:46:49.663Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 prometheus | time=2025-06-16T11:46:49.663Z level=INFO source=tls_config.go:350 msg="TLS is disabled." 
component=web http2=false address=[::]:9090 prometheus | time=2025-06-16T11:46:49.664Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb prometheus | time=2025-06-16T11:46:49.664Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=2.04µs prometheus | time=2025-06-16T11:46:49.664Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb prometheus | time=2025-06-16T11:46:49.665Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=327.165µs prometheus | time=2025-06-16T11:46:49.665Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=46.291µs wal_replay_duration=358.366µs wbl_replay_duration=210ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=2.04µs total_replay_duration=537.429µs prometheus | time=2025-06-16T11:46:49.668Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC prometheus | time=2025-06-16T11:46:49.668Z level=INFO source=main.go:1290 msg="TSDB started" prometheus | time=2025-06-16T11:46:49.668Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml prometheus | time=2025-06-16T11:46:49.670Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 prometheus | time=2025-06-16T11:46:49.670Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.81µs remote_storage=2.69µs web_handler=710ns query_engine=1.28µs scrape=280.214µs scrape_sd=258.445µs notify=152.912µs notify_sd=53.091µs rules=1.84µs tracing=6.33µs filename=/etc/prometheus/prometheus.yml totalDuration=1.684337ms prometheus | time=2025-06-16T11:46:49.670Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." prometheus | time=2025-06-16T11:46:49.670Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2025-06-16 11:46:50,463] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 11:46:50,466] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 11:46:50,466] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 11:46:50,466] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 11:46:50,466] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 11:46:50,469] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-16 11:46:50,469] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-16 11:46:50,469] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2025-06-16 11:46:50,469] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2025-06-16 11:46:50,470] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2025-06-16 11:46:50,470] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 11:46:50,471] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 11:46:50,471] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 11:46:50,471] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 11:46:50,471] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2025-06-16 11:46:50,471] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2025-06-16 11:46:50,485] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2025-06-16 11:46:50,487] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-16 11:46:50,487] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2025-06-16 11:46:50,489] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-16 11:46:50,503] INFO [ZooKeeper ASCII-art startup banner] (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,504] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,504] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,504] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | 
[2025-06-16 11:46:50,505] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-reso
urce-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kaf
ka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:os.memory.free=495MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,505] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,505] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,505] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,505] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,505] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,506] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,506] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,506] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,506] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper | [2025-06-16 11:46:50,507] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,507] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,513] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-16 11:46:50,513] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) zookeeper | [2025-06-16 11:46:50,514] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-16 11:46:50,514] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-16 11:46:50,514] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-16 11:46:50,514] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-16 11:46:50,514] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-16 11:46:50,514] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper | [2025-06-16 11:46:50,517] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,517] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,517] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-16 11:46:50,517] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper | [2025-06-16 11:46:50,518] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,540] INFO Logging initialized @403ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper | [2025-06-16 11:46:50,601] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-16 11:46:50,602] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-16 11:46:50,623] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server) zookeeper | [2025-06-16 11:46:50,669] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper | [2025-06-16 11:46:50,669] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper | [2025-06-16 11:46:50,670] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) zookeeper | [2025-06-16 11:46:50,673] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper | [2025-06-16 11:46:50,681] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper | [2025-06-16 11:46:50,691] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper | [2025-06-16 11:46:50,691] INFO Started @558ms (org.eclipse.jetty.server.Server) zookeeper | [2025-06-16 11:46:50,691] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper | [2025-06-16 11:46:50,694] INFO Using 
org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-16 11:46:50,695] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper | [2025-06-16 11:46:50,696] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-16 11:46:50,696] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper | [2025-06-16 11:46:50,721] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-16 11:46:50,721] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper | [2025-06-16 11:46:50,722] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-16 11:46:50,722] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-16 11:46:50,726] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper | [2025-06-16 11:46:50,727] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-16 11:46:50,729] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper | [2025-06-16 11:46:50,730] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2025-06-16 11:46:50,730] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2025-06-16 11:46:50,736] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper | [2025-06-16 11:46:50,736] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper | [2025-06-16 11:46:50,752] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper | [2025-06-16 11:46:50,753] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper | [2025-06-16 11:46:51,807] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) Tearing down containers... 
Container policy-csit Stopping Container grafana Stopping Container policy-opa-pdp Stopping Container policy-csit Stopped Container policy-csit Removing Container policy-csit Removed Container grafana Stopped Container grafana Removing Container grafana Removed Container prometheus Stopping Container prometheus Stopped Container prometheus Removing Container prometheus Removed Container policy-opa-pdp Stopped Container policy-opa-pdp Removing Container policy-opa-pdp Removed Container policy-pap Stopping Container policy-pap Stopped Container policy-pap Removing Container policy-pap Removed Container policy-api Stopping Container kafka Stopping Container kafka Stopped Container kafka Removing Container kafka Removed Container zookeeper Stopping Container zookeeper Stopped Container zookeeper Removing Container zookeeper Removed Container policy-api Stopped Container policy-api Removing Container policy-api Removed Container policy-db-migrator Stopping Container policy-db-migrator Stopped Container policy-db-migrator Removing Container policy-db-migrator Removed Container postgres Stopping Container postgres Stopped Container postgres Removing Container postgres Removed Network compose_default Removing Network compose_default Removed $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 2075 killed; [ssh-agent] Stopped. Robot results publisher started... INFO: Checking test criticality is deprecated and will be dropped in a future release! -Parsing output xml: Done! -Copying log files to build dir: Done! -Assigning results to build: Done! -Checking thresholds: Done! Done publishing Robot results. [PostBuildScript] - [INFO] Executing post build scripts. [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins17740530698788201198.sh ---> sysstat.sh [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins11928958053683763973.sh ---> package-listing.sh ++ tr '[:upper:]' '[:lower:]' ++ facter osfamily + OS_FAMILY=debian + workspace=/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp ']' + mkdir -p /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/archives/ [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins14212807151681641966.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-QaKm from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-QaKm/bin to PATH INFO: Running in OpenStack, capturing instance metadata [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins2536458372897164201.sh provisioning config files... 
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp@tmp/config12631572253889805859tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] Run condition [Regular expression match] preventing perform for step [Provide Configuration files] [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins5378522003727158682.sh ---> create-netrc.sh [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins11660961740779801257.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-QaKm from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-QaKm/bin to PATH [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins2874718829872972836.sh ---> sudo-logs.sh Archiving 'sudo' log.. [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash /tmp/jenkins2455489172863964941.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-QaKm from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-QaKm/bin to PATH INFO: No Stack... INFO: Retrieving Pricing Info for: v3-standard-8 INFO: Archiving Costs [policy-opa-pdp-master-project-csit-policy-opa-pdp] $ /bin/bash -l /tmp/jenkins8958251358272619223.sh ---> logs-deploy.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-policy-opa-pdp/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-QaKm from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-QaKm/bin to PATH INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-opa-pdp-master-project-csit-policy-opa-pdp/179 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt Archives upload complete. 
INFO: archiving logs to Nexus ---> uname -a: Linux prd-ubuntu1804-docker-8c-8g-21584 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux ---> lscpu: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 8 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC-Rome Processor Stepping: 0 CPU MHz: 2799.998 BogoMIPS: 5599.99 Virtualization: AMD-V Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 512K L3 cache: 16384K NUMA node0 CPU(s): 0-7 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities ---> nproc: 8 ---> df -h: Filesystem Size Used Avail Use% Mounted on udev 16G 0 16G 0% /dev tmpfs 3.2G 708K 3.2G 1% /run /dev/vda1 155G 15G 141G 10% / tmpfs 16G 0 16G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 16G 0 16G 0% /sys/fs/cgroup /dev/vda15 105M 4.4M 100M 5% /boot/efi tmpfs 3.2G 0 3.2G 0% /run/user/1001 ---> free -m: total used free shared buff/cache available Mem: 32167 897 24029 0 7239 30813 Swap: 1023 0 1023 ---> ip addr: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 link/ether fa:16:3e:0f:da:b6 brd ff:ff:ff:ff:ff:ff inet 10.30.106.89/23 brd 10.30.107.255 scope global dynamic ens3 valid_lft 85803sec preferred_lft 85803sec inet6 fe80::f816:3eff:fe0f:dab6/64 scope link valid_lft forever preferred_lft forever 3: docker0: mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:14:8a:99:63 brd ff:ff:ff:ff:ff:ff inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::42:14ff:fe8a:9963/64 scope link valid_lft forever preferred_lft forever ---> sar -b -r -n DEV: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21584) 06/16/25 _x86_64_ (8 CPU) 11:44:20 LINUX RESTART (8 CPU) 11:45:01 tps rtps wtps bread/s bwrtn/s 11:46:01 171.05 37.31 133.74 2922.45 73115.81 11:47:01 734.59 4.88 729.71 493.92 233484.29 11:48:01 29.10 0.07 29.03 3.07 7288.79 11:49:01 4.50 0.00 4.50 0.00 114.51 11:50:01 43.91 0.22 43.69 34.98 7335.30 11:51:01 177.64 0.28 177.35 15.06 26740.88 11:52:01 10.51 0.00 10.51 0.00 239.43 11:53:01 25.41 0.02 25.40 4.27 416.20 11:54:01 54.42 1.27 53.16 89.45 2228.43 Average: 139.19 4.90 134.29 396.58 39054.69 11:45:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 11:46:01 29802268 31679092 3136952 9.52 77184 2101728 1529496 4.50 902244 1924656 338680 11:47:01 24443096 31003320 8496124 25.79 161688 6487168 6171872 18.16 1782616 6075824 52148 11:48:01 23414264 30072272 9524956 28.92 163576 6586636 7309216 21.51 2793708 
6082196 468 11:49:01 23398312 30056652 9540908 28.97 163748 6587180 7534168 22.17 2808484 6082380 368 11:50:01 23070516 29951412 9868704 29.96 176700 6773860 7817260 23.00 2965360 6223076 18456 11:51:01 22698936 29898228 10240284 31.09 204872 7034144 7915936 23.29 3082244 6446800 2240 11:52:01 22687576 29888300 10251644 31.12 205008 7034936 7949304 23.39 3097596 6441560 48 11:53:01 22870640 30029628 10068580 30.57 205260 6997352 7364520 21.67 2971028 6395448 264 11:54:01 24586500 31540316 8352720 25.36 206704 6780748 1619836 4.77 1512112 6202632 11180 Average: 24108012 30457691 8831208 26.81 173860 6264861 6134623 18.05 2435044 5763841 47095 11:45:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 11:46:01 lo 8.93 8.93 0.86 0.86 0.00 0.00 0.00 0.00 11:46:01 ens3 411.96 304.32 3804.06 28.50 0.00 0.00 0.00 0.00 11:46:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 11:47:01 vethebbbfbd 1.60 1.65 0.16 0.17 0.00 0.00 0.00 0.00 11:47:01 br-1d334709b040 37.76 46.88 2.39 311.36 0.00 0.00 0.00 0.00 11:47:01 vetha721af5 44.31 60.74 6.75 8.52 0.00 0.00 0.00 0.00 11:47:01 lo 6.13 6.13 0.55 0.55 0.00 0.00 0.00 0.00 11:48:01 vethebbbfbd 9.45 8.67 1.18 1.26 0.00 0.00 0.00 0.00 11:48:01 br-1d334709b040 0.37 0.27 0.02 0.02 0.00 0.00 0.00 0.00 11:48:01 vetha721af5 106.15 112.20 21.13 18.22 0.00 0.00 0.00 0.00 11:48:01 lo 1.20 1.20 0.09 0.09 0.00 0.00 0.00 0.00 11:49:01 vethebbbfbd 13.03 8.78 1.10 1.23 0.00 0.00 0.00 0.00 11:49:01 br-1d334709b040 0.38 0.22 0.02 0.01 0.00 0.00 0.00 0.00 11:49:01 vetha721af5 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00 11:49:01 lo 1.40 1.40 0.11 0.11 0.00 0.00 0.00 0.00 11:50:01 vethebbbfbd 15.76 10.91 1.61 1.61 0.00 0.00 0.00 0.00 11:50:01 br-1d334709b040 0.20 0.27 0.02 0.02 0.00 0.00 0.00 0.00 11:50:01 vetha721af5 100.05 100.56 25.15 11.39 0.00 0.00 0.00 0.00 11:50:01 lo 2.17 2.17 0.18 0.18 0.00 0.00 0.00 0.00 11:51:01 vethebbbfbd 14.45 9.77 1.37 1.42 0.00 0.00 0.00 0.00 11:51:01 br-1d334709b040 0.07 0.00 0.00 0.00 0.00 0.00 0.00 0.00 11:51:01 vetha721af5 165.56 166.39 40.93 18.17 0.00 0.00 0.00 0.00 11:51:01 lo 1.40 1.40 0.11 0.11 0.00 0.00 0.00 0.00 11:52:01 vethebbbfbd 17.85 13.35 2.19 2.00 0.00 0.00 0.00 0.00 11:52:01 br-1d334709b040 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 11:52:01 vetha721af5 683.85 686.92 166.11 73.93 0.00 0.00 0.00 0.01 11:52:01 lo 1.20 1.20 0.09 0.09 0.00 0.00 0.00 0.00 11:53:01 vethebbbfbd 13.71 9.08 1.15 1.29 0.00 0.00 0.00 0.00 11:53:01 br-1d334709b040 0.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00 11:53:01 vetha721af5 0.00 0.03 0.00 0.00 0.00 0.00 0.00 0.00 11:53:01 lo 3.60 3.60 0.31 0.31 0.00 0.00 0.00 0.00 11:54:01 lo 0.47 0.47 0.05 0.05 0.00 0.00 0.00 0.00 11:54:01 ens3 2166.29 1318.51 37438.79 195.03 0.00 0.00 0.00 0.00 11:54:01 docker0 118.56 179.80 7.74 1349.26 0.00 0.00 0.00 0.00 Average: lo 2.95 2.95 0.26 0.26 0.00 0.00 0.00 0.00 Average: ens3 204.29 121.44 4075.16 13.84 0.00 0.00 0.00 0.00 Average: docker0 13.20 20.02 0.86 150.20 0.00 0.00 0.00 0.00 ---> sar -P ALL: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21584) 06/16/25 _x86_64_ (8 CPU) 11:44:20 LINUX RESTART (8 CPU) 11:45:01 CPU %user %nice %system %iowait %steal %idle 11:46:01 all 9.30 0.00 0.85 4.47 0.04 85.34 11:46:01 0 21.56 0.00 1.87 2.54 0.10 73.93 11:46:01 1 3.19 0.00 0.64 4.35 0.03 91.79 11:46:01 2 6.87 0.00 0.69 0.37 0.03 92.05 11:46:01 3 0.95 0.00 0.28 0.17 0.02 98.58 11:46:01 4 6.44 0.00 0.72 4.52 0.03 88.28 11:46:01 5 3.01 0.00 0.38 21.38 0.03 75.19 11:46:01 6 13.63 0.00 1.10 0.93 0.03 84.30 11:46:01 7 18.79 0.00 1.17 1.51 0.05 78.48 11:47:01 all 
18.73 0.00 7.52 12.66 0.08 61.01 11:47:01 0 18.64 0.00 7.04 4.41 0.08 69.82 11:47:01 1 18.30 0.00 7.86 15.64 0.07 58.14 11:47:01 2 19.82 0.00 7.45 11.22 0.08 61.43 11:47:01 3 18.72 0.00 7.98 30.29 0.08 42.93 11:47:01 4 18.53 0.00 6.76 12.11 0.10 62.49 11:47:01 5 18.21 0.00 7.50 6.65 0.07 67.57 11:47:01 6 17.39 0.00 7.93 15.37 0.07 59.24 11:47:01 7 20.26 0.00 7.63 5.74 0.07 66.30 11:48:01 all 19.23 0.00 1.68 0.28 0.07 78.74 11:48:01 0 19.82 0.00 1.75 0.07 0.05 78.31 11:48:01 1 23.47 0.00 1.97 0.67 0.07 73.82 11:48:01 2 23.97 0.00 1.76 0.12 0.07 74.09 11:48:01 3 16.51 0.00 1.49 0.69 0.07 81.24 11:48:01 4 20.00 0.00 1.71 0.02 0.07 78.20 11:48:01 5 17.61 0.00 1.72 0.07 0.07 80.54 11:48:01 6 18.59 0.00 1.44 0.10 0.08 79.79 11:48:01 7 13.90 0.00 1.55 0.50 0.05 83.99 11:49:01 all 0.70 0.00 0.14 0.02 0.03 99.11 11:49:01 0 0.78 0.00 0.12 0.00 0.03 99.07 11:49:01 1 0.85 0.00 0.12 0.00 0.02 99.02 11:49:01 2 0.55 0.00 0.10 0.02 0.03 99.30 11:49:01 3 0.28 0.00 0.08 0.00 0.02 99.61 11:49:01 4 1.15 0.00 0.23 0.00 0.07 98.55 11:49:01 5 0.55 0.00 0.17 0.02 0.03 99.23 11:49:01 6 0.65 0.00 0.20 0.00 0.03 99.12 11:49:01 7 0.77 0.00 0.17 0.10 0.03 98.93 11:50:01 all 3.36 0.00 0.77 0.23 0.04 95.60 11:50:01 0 2.97 0.00 0.59 0.00 0.03 96.40 11:50:01 1 3.64 0.00 0.88 1.04 0.05 94.38 11:50:01 2 2.58 0.00 0.58 0.03 0.03 96.77 11:50:01 3 2.86 0.00 0.72 0.14 0.03 96.25 11:50:01 4 2.89 0.00 1.09 0.12 0.07 95.84 11:50:01 5 5.06 0.00 0.73 0.00 0.05 94.16 11:50:01 6 2.62 0.00 0.56 0.02 0.05 96.76 11:50:01 7 4.23 0.00 0.95 0.46 0.05 94.31 11:51:01 all 7.35 0.00 1.89 1.36 0.07 89.32 11:51:01 0 4.97 0.00 1.66 2.75 0.05 90.57 11:51:01 1 13.70 0.00 2.38 1.06 0.08 82.78 11:51:01 2 10.75 0.00 1.83 0.18 0.05 87.19 11:51:01 3 3.60 0.00 1.76 2.03 0.07 92.54 11:51:01 4 7.14 0.00 2.33 0.07 0.07 90.40 11:51:01 5 4.45 0.00 0.97 0.03 0.07 94.48 11:51:01 6 10.59 0.00 2.49 0.27 0.07 86.58 11:51:01 7 3.59 0.00 1.78 4.53 0.08 90.02 11:52:01 all 3.61 0.00 0.62 0.05 0.05 95.66 11:52:01 0 3.65 0.00 0.40 0.03 0.05 95.86 11:52:01 1 2.87 0.00 0.60 0.00 0.03 96.50 11:52:01 2 3.16 0.00 0.42 0.25 0.05 96.12 11:52:01 3 3.24 0.00 0.70 0.00 0.07 95.99 11:52:01 4 4.36 0.00 0.55 0.02 0.07 95.01 11:52:01 5 4.75 0.00 0.60 0.02 0.07 94.57 11:52:01 6 3.56 0.00 0.60 0.03 0.05 95.76 11:52:01 7 3.31 0.00 1.09 0.03 0.05 95.52 11:53:01 all 1.45 0.00 0.42 0.06 0.04 98.03 11:53:01 0 1.15 0.00 0.43 0.00 0.05 98.36 11:53:01 1 1.40 0.00 0.40 0.00 0.03 98.17 11:53:01 2 1.02 0.00 0.45 0.03 0.03 98.46 11:53:01 3 1.74 0.00 0.40 0.02 0.05 97.80 11:53:01 4 1.25 0.00 0.45 0.02 0.07 98.21 11:53:01 5 1.19 0.00 0.35 0.22 0.03 98.21 11:53:01 6 1.84 0.00 0.42 0.00 0.05 97.70 11:53:01 7 2.03 0.00 0.42 0.20 0.03 97.32 11:54:01 all 5.80 0.00 0.66 0.21 0.03 93.29 11:54:01 0 3.35 0.00 0.62 0.05 0.03 95.95 11:54:01 1 0.58 0.00 0.33 0.07 0.02 99.00 11:54:01 2 1.39 0.00 0.48 0.12 0.07 97.95 11:54:01 3 13.28 0.00 0.73 0.20 0.03 85.75 11:54:01 4 0.80 0.00 0.43 0.05 0.02 98.70 11:54:01 5 9.72 0.00 0.92 0.08 0.03 89.25 11:54:01 6 16.39 0.00 1.18 0.07 0.05 82.31 11:54:01 7 0.92 0.00 0.58 1.07 0.02 97.41 Average: all 7.72 0.00 1.61 2.14 0.05 88.48 Average: 0 8.54 0.00 1.61 1.09 0.05 88.71 Average: 1 7.55 0.00 1.68 2.52 0.04 88.21 Average: 2 7.78 0.00 1.52 1.37 0.05 89.28 Average: 3 6.78 0.00 1.56 3.69 0.05 87.92 Average: 4 6.94 0.00 1.58 1.87 0.06 89.54 Average: 5 7.17 0.00 1.48 3.17 0.05 88.13 Average: 6 9.47 0.00 1.76 1.85 0.05 86.86 Average: 7 7.52 0.00 1.70 1.57 0.05 89.17