14:54:10 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/141264
14:54:10 Running as SYSTEM
14:54:10 [EnvInject] - Loading node environment variables.
14:54:10 Building remotely on prd-ubuntu1804-docker-8c-8g-20900 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp
14:54:10 [ssh-agent] Looking for ssh-agent implementation...
14:54:10 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
14:54:10 $ ssh-agent
14:54:10 SSH_AUTH_SOCK=/tmp/ssh-Q8Jjbn3Tk4oF/agent.2048
14:54:10 SSH_AGENT_PID=2050
14:54:10 [ssh-agent] Started.
14:54:10 Running ssh-add (command line suppressed)
14:54:10 Identity added: /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp@tmp/private_key_6277709602169040896.key (/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp@tmp/private_key_6277709602169040896.key)
14:54:10 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
14:54:10 The recommended git tool is: NONE
14:54:11 using credential onap-jenkins-ssh
14:54:12 Wiping out workspace first.
14:54:12 Cloning the remote Git repository
14:54:12 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
14:54:12  > git init /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp # timeout=10
14:54:12 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
14:54:12  > git --version # timeout=10
14:54:12  > git --version # 'git version 2.17.1'
14:54:12 using GIT_SSH to set credentials Gerrit user
14:54:12 Verifying host key using manually-configured host key entries
14:54:12  > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
14:54:12  > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
14:54:12  > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
14:54:13  > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
14:54:13 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
14:54:13 using GIT_SSH to set credentials Gerrit user
14:54:13 Verifying host key using manually-configured host key entries
14:54:13  > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/64/141264/1 # timeout=30
14:54:13  > git rev-parse 473f78ecac5fb75e5968b31a5bab95eaba72c803^{commit} # timeout=10
14:54:13 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
14:54:13 Checking out Revision 473f78ecac5fb75e5968b31a5bab95eaba72c803 (refs/changes/64/141264/1)
14:54:13  > git config core.sparsecheckout # timeout=10
14:54:13  > git checkout -f 473f78ecac5fb75e5968b31a5bab95eaba72c803 # timeout=30
14:54:16 Commit message: "Add Fix fail handling in ACM runtime in CSIT"
14:54:16  > git rev-parse FETCH_HEAD^{commit} # timeout=10
14:54:16  > git rev-list --no-walk 8746ba7d00fb7412b3f40b6e85f47ce67cf7969c # timeout=10
14:54:16 provisioning config files...
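The checkout above can be reproduced by hand; a minimal sketch, assuming a local clone directory named policy-docker (the repository URL and the change ref refs/changes/64/141264/1 are taken from the log):

    git clone git://cloud.onap.org/mirror/policy/docker.git policy-docker
    cd policy-docker
    # Gerrit publishes patchset N of change C under refs/changes/<last two digits of C>/<C>/<N>
    git fetch origin refs/changes/64/141264/1
    git checkout -f FETCH_HEAD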
14:54:16 copy managed file [npmrc] to file:/home/jenkins/.npmrc
14:54:16 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
14:54:16 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins14001272576066499802.sh
14:54:16 ---> python-tools-install.sh
14:54:16 Setup pyenv:
14:54:17 * system (set by /opt/pyenv/version)
14:54:17 * 3.8.13 (set by /opt/pyenv/version)
14:54:17 * 3.9.13 (set by /opt/pyenv/version)
14:54:17 * 3.10.6 (set by /opt/pyenv/version)
14:54:21 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-4ajy
14:54:21 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
14:54:25 lf-activate-venv(): INFO: Installing: lftools
14:55:09 lf-activate-venv(): INFO: Adding /tmp/venv-4ajy/bin to PATH
14:55:09 Generating Requirements File
14:55:34 Python 3.10.6
14:55:34 pip 25.1.1 from /tmp/venv-4ajy/lib/python3.10/site-packages/pip (python 3.10)
14:55:34 appdirs==1.4.4
14:55:34 argcomplete==3.6.2
14:55:34 aspy.yaml==1.3.0
14:55:34 attrs==25.3.0
14:55:34 autopage==0.5.2
14:55:34 beautifulsoup4==4.13.4
14:55:34 boto3==1.38.36
14:55:34 botocore==1.38.36
14:55:34 bs4==0.0.2
14:55:34 cachetools==5.5.2
14:55:34 certifi==2025.4.26
14:55:34 cffi==1.17.1
14:55:34 cfgv==3.4.0
14:55:34 chardet==5.2.0
14:55:34 charset-normalizer==3.4.2
14:55:34 click==8.2.1
14:55:34 cliff==4.10.0
14:55:34 cmd2==2.6.1
14:55:34 cryptography==3.3.2
14:55:34 debtcollector==3.0.0
14:55:34 decorator==5.2.1
14:55:34 defusedxml==0.7.1
14:55:34 Deprecated==1.2.18
14:55:34 distlib==0.3.9
14:55:34 dnspython==2.7.0
14:55:34 docker==7.1.0
14:55:34 dogpile.cache==1.4.0
14:55:34 durationpy==0.10
14:55:34 email_validator==2.2.0
14:55:34 filelock==3.18.0
14:55:34 future==1.0.0
14:55:34 gitdb==4.0.12
14:55:34 GitPython==3.1.44
14:55:34 google-auth==2.40.3
14:55:34 httplib2==0.22.0
14:55:34 identify==2.6.12
14:55:34 idna==3.10
14:55:34 importlib-resources==1.5.0
14:55:34 iso8601==2.1.0
14:55:34 Jinja2==3.1.6
14:55:34 jmespath==1.0.1
14:55:34 jsonpatch==1.33
14:55:34 jsonpointer==3.0.0
14:55:34 jsonschema==4.24.0
14:55:34 jsonschema-specifications==2025.4.1
14:55:34 keystoneauth1==5.11.1
14:55:34 kubernetes==33.1.0
14:55:34 lftools==0.37.13
14:55:34 lxml==5.4.0
14:55:34 MarkupSafe==3.0.2
14:55:34 msgpack==1.1.1
14:55:34 multi_key_dict==2.0.3
14:55:34 munch==4.0.0
14:55:34 netaddr==1.3.0
14:55:34 niet==1.4.2
14:55:34 nodeenv==1.9.1
14:55:34 oauth2client==4.1.3
14:55:34 oauthlib==3.2.2
14:55:34 openstacksdk==4.6.0
14:55:34 os-client-config==2.1.0
14:55:34 os-service-types==1.7.0
14:55:34 osc-lib==4.0.2
14:55:34 oslo.config==9.8.0
14:55:34 oslo.context==6.0.0
14:55:34 oslo.i18n==6.5.1
14:55:34 oslo.log==7.1.0
14:55:34 oslo.serialization==5.7.0
14:55:34 oslo.utils==9.0.0
14:55:34 packaging==25.0
14:55:34 pbr==6.1.1
14:55:34 platformdirs==4.3.8
14:55:34 prettytable==3.16.0
14:55:34 psutil==7.0.0
14:55:34 pyasn1==0.6.1
14:55:34 pyasn1_modules==0.4.2
14:55:34 pycparser==2.22
14:55:34 pygerrit2==2.0.15
14:55:34 PyGithub==2.6.1
14:55:34 PyJWT==2.10.1
14:55:34 PyNaCl==1.5.0
14:55:34 pyparsing==2.4.7
14:55:34 pyperclip==1.9.0
14:55:34 pyrsistent==0.20.0
14:55:34 python-cinderclient==9.7.0
14:55:34 python-dateutil==2.9.0.post0
14:55:34 python-heatclient==4.2.0
14:55:34 python-jenkins==1.8.2
14:55:34 python-keystoneclient==5.6.0
14:55:34 python-magnumclient==4.8.1
14:55:34 python-openstackclient==8.1.0
14:55:34 python-swiftclient==4.8.0
14:55:34 PyYAML==6.0.2
14:55:34 referencing==0.36.2
14:55:34 requests==2.32.4
14:55:34 requests-oauthlib==2.0.0
14:55:34 requestsexceptions==1.4.0
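The lf-activate-venv() steps above boil down to a plain venv bootstrap; a minimal sketch of the equivalent commands, not the actual LF helper (the venv path and the lftools install are taken from the log; python3 is assumed to resolve to the pyenv-selected 3.10.6):

    python3 -m venv /tmp/venv-4ajy
    /tmp/venv-4ajy/bin/pip install lftools
    export PATH=/tmp/venv-4ajy/bin:$PATH
    # "Generating Requirements File" corresponds to freezing the venv contents:
    pip freeze > requirements.txt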
14:55:34 rfc3986==2.0.0
14:55:34 rpds-py==0.25.1
14:55:34 rsa==4.9.1
14:55:34 ruamel.yaml==0.18.14
14:55:34 ruamel.yaml.clib==0.2.12
14:55:34 s3transfer==0.13.0
14:55:34 simplejson==3.20.1
14:55:34 six==1.17.0
14:55:34 smmap==5.0.2
14:55:34 soupsieve==2.7
14:55:34 stevedore==5.4.1
14:55:34 tabulate==0.9.0
14:55:34 toml==0.10.2
14:55:34 tomlkit==0.13.3
14:55:34 tqdm==4.67.1
14:55:34 typing_extensions==4.14.0
14:55:34 tzdata==2025.2
14:55:34 urllib3==1.26.20
14:55:34 virtualenv==20.31.2
14:55:34 wcwidth==0.2.13
14:55:34 websocket-client==1.8.0
14:55:34 wrapt==1.17.2
14:55:34 xdg==6.0.0
14:55:34 xmltodict==0.14.2
14:55:34 yq==3.4.3
14:55:34 [EnvInject] - Injecting environment variables from a build step.
14:55:34 [EnvInject] - Injecting as environment variables the properties content
14:55:34 SET_JDK_VERSION=openjdk17
14:55:34 GIT_URL="git://cloud.onap.org/mirror"
14:55:34
14:55:34 [EnvInject] - Variables injected successfully.
14:55:34 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/sh /tmp/jenkins4016610895654884435.sh
14:55:34 ---> update-java-alternatives.sh
14:55:34 ---> Updating Java version
14:55:34 ---> Ubuntu/Debian system detected
14:55:35 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
14:55:35 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
14:55:35 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
14:55:36 openjdk version "17.0.4" 2022-07-19
14:55:36 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
14:55:36 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
14:55:36 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
14:55:36 [EnvInject] - Injecting environment variables from a build step.
14:55:36 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
14:55:36 [EnvInject] - Variables injected successfully.
14:55:36 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/sh -xe /tmp/jenkins12602376119315093452.sh
14:55:36 + /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/csit/run-project-csit.sh opa-pdp
14:55:37 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
14:55:37 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
14:55:37 Configure a credential helper to remove this warning. See
14:55:37 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
14:55:37
14:55:37 Login Succeeded
14:55:37 docker: 'compose' is not a docker command.
14:55:37 See 'docker --help'
14:55:37 Docker Compose Plugin not installed. Installing now...
14:55:37   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
14:55:37                                  Dload  Upload   Total   Spent    Left  Speed
14:55:38 100 60.2M  100 60.2M    0     0  66.0M      0 --:--:-- --:--:-- --:--:-- 81.3M
14:55:38 Setting project configuration for: opa-pdp
14:55:38 Configuring docker compose...
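Two things the tail of this step suggests, sketched as manual commands: the login warning goes away with --password-stdin, and the missing compose plugin can be installed per-user (the release URL below is an assumption; the exact URL the job curls is not shown in the log):

    # Quieter, safer login than passing --password on the CLI:
    echo "$DOCKER_PASSWORD" | docker login --username "$DOCKER_USER" --password-stdin
    # Per-user install of the Docker Compose CLI plugin:
    mkdir -p ~/.docker/cli-plugins
    curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
         -o ~/.docker/cli-plugins/docker-compose
    chmod +x ~/.docker/cli-plugins/docker-compose
    docker compose version    # 'compose' should now resolve as a docker subcommand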
14:55:40 Starting opa-pdp using postgres + Grafana/Prometheus
14:55:40 prometheus Pulling
14:55:40 postgres Pulling
14:55:40 zookeeper Pulling
14:55:40 kafka Pulling
14:55:40 pap Pulling
14:55:40 grafana Pulling
14:55:40 policy-db-migrator Pulling
14:55:40 api Pulling
14:55:40 opa-pdp Pulling
[per-layer "Pulling fs layer" / "Waiting" / "Downloading" / "Extracting" progress redraws elided]
14:55:44 api Pulled
14:55:45 pap Pulled
14:55:46 opa-pdp Pulled
14:55:49 policy-db-migrator Pulled
[log truncated mid-pull at 14:55:50; layer downloads for the remaining images still in progress]
[=====================================> ] 80.56MB/107.3MB 14:55:50 454a4350d439 Downloading [============> ] 3.01kB/11.93kB 14:55:50 454a4350d439 Download complete 14:55:50 9a8c18aee5ea Downloading [==================================================>] 1.227kB/1.227kB 14:55:50 9a8c18aee5ea Verifying Checksum 14:55:50 9a8c18aee5ea Download complete 14:55:50 8b5292c940e1 Downloading [====> ] 5.406MB/63.48MB 14:55:50 9fa9226be034 Downloading [> ] 15.3kB/783kB 14:55:50 eabd8714fec9 Extracting [=> ] 10.58MB/375MB 14:55:50 9fa9226be034 Downloading [==================================================>] 783kB/783kB 14:55:50 9fa9226be034 Download complete 14:55:50 9fa9226be034 Extracting [==> ] 32.77kB/783kB 14:55:50 1617e25568b2 Downloading [=> ] 15.3kB/480.9kB 14:55:50 1617e25568b2 Downloading [==================================================>] 480.9kB/480.9kB 14:55:50 1617e25568b2 Verifying Checksum 14:55:50 1617e25568b2 Download complete 14:55:50 e73cb4a42719 Extracting [========================================> ] 88.01MB/109.1MB 14:55:50 f836d47fdc4d Downloading [=============================================> ] 97.86MB/107.3MB 14:55:50 55f2b468da67 Extracting [=================================> ] 173.8MB/257.9MB 14:55:50 6ac0e4adf315 Downloading [> ] 539.6kB/62.07MB 14:55:50 8b5292c940e1 Downloading [===========> ] 14.6MB/63.48MB 14:55:50 f836d47fdc4d Verifying Checksum 14:55:50 f836d47fdc4d Download complete 14:55:50 9fa9226be034 Extracting [=======================> ] 360.4kB/783kB 14:55:50 9fa9226be034 Extracting [==================================================>] 783kB/783kB 14:55:50 9fa9226be034 Extracting [==================================================>] 783kB/783kB 14:55:50 eabd8714fec9 Extracting [=> ] 13.93MB/375MB 14:55:50 f3b09c502777 Downloading [> ] 539.6kB/56.52MB 14:55:50 6ac0e4adf315 Downloading [===> ] 4.324MB/62.07MB 14:55:50 8b5292c940e1 Downloading [===================> ] 24.87MB/63.48MB 14:55:50 e73cb4a42719 Extracting [==========================================> ] 91.91MB/109.1MB 14:55:51 eabd8714fec9 Extracting [==> ] 16.71MB/375MB 14:55:51 55f2b468da67 Extracting [==================================> ] 175.5MB/257.9MB 14:55:51 f3b09c502777 Downloading [========> ] 9.19MB/56.52MB 14:55:51 6ac0e4adf315 Downloading [============> ] 15.14MB/62.07MB 14:55:51 8b5292c940e1 Downloading [=============================> ] 37.31MB/63.48MB 14:55:51 e73cb4a42719 Extracting [==========================================> ] 93.59MB/109.1MB 14:55:51 f3b09c502777 Downloading [================> ] 18.92MB/56.52MB 14:55:51 8b5292c940e1 Downloading [==================================> ] 43.79MB/63.48MB 14:55:51 55f2b468da67 Extracting [==================================> ] 177.1MB/257.9MB 14:55:51 6ac0e4adf315 Downloading [=================> ] 22.17MB/62.07MB 14:55:51 eabd8714fec9 Extracting [==> ] 21.73MB/375MB 14:55:51 e73cb4a42719 Extracting [===========================================> ] 94.14MB/109.1MB 14:55:51 d953cde4314b Pull complete 14:55:51 f3b09c502777 Downloading [============================> ] 31.9MB/56.52MB 14:55:51 8b5292c940e1 Downloading [=======================================> ] 49.74MB/63.48MB 14:55:51 eabd8714fec9 Extracting [===> ] 23.4MB/375MB 14:55:51 55f2b468da67 Extracting [==================================> ] 178.8MB/257.9MB 14:55:51 6ac0e4adf315 Downloading [=============================> ] 36.22MB/62.07MB 14:55:51 e73cb4a42719 Extracting [===========================================> ] 95.81MB/109.1MB 14:55:51 9fa9226be034 Pull complete 14:55:51 6ac0e4adf315 
Downloading [===============================> ] 39.47MB/62.07MB 14:55:51 aecd4cb03450 Extracting [============================> ] 32.77kB/58.08kB 14:55:51 aecd4cb03450 Extracting [==================================================>] 58.08kB/58.08kB 14:55:51 8b5292c940e1 Downloading [============================================> ] 56.23MB/63.48MB 14:55:51 e73cb4a42719 Extracting [============================================> ] 96.37MB/109.1MB 14:55:51 55f2b468da67 Extracting [==================================> ] 179.9MB/257.9MB 14:55:51 1617e25568b2 Extracting [===> ] 32.77kB/480.9kB 14:55:51 eabd8714fec9 Extracting [===> ] 23.95MB/375MB 14:55:51 6ac0e4adf315 Downloading [=====================================> ] 47.04MB/62.07MB 14:55:51 8b5292c940e1 Downloading [===============================================> ] 60.01MB/63.48MB 14:55:51 8b5292c940e1 Verifying Checksum 14:55:51 8b5292c940e1 Download complete 14:55:51 e73cb4a42719 Extracting [============================================> ] 98.04MB/109.1MB 14:55:51 55f2b468da67 Extracting [===================================> ] 183.3MB/257.9MB 14:55:51 1617e25568b2 Extracting [==================================> ] 327.7kB/480.9kB 14:55:51 eabd8714fec9 Extracting [====> ] 31.2MB/375MB 14:55:51 6ac0e4adf315 Downloading [=======================================> ] 49.2MB/62.07MB 14:55:51 f3b09c502777 Downloading [================================> ] 36.76MB/56.52MB 14:55:51 e73cb4a42719 Extracting [=============================================> ] 99.71MB/109.1MB 14:55:51 408012a7b118 Downloading [==================================================>] 637B/637B 14:55:51 408012a7b118 Verifying Checksum 14:55:51 408012a7b118 Download complete 14:55:51 55f2b468da67 Extracting [===================================> ] 184.9MB/257.9MB 14:55:51 eabd8714fec9 Extracting [=====> ] 37.88MB/375MB 14:55:51 1617e25568b2 Extracting [===============================================> ] 458.8kB/480.9kB 14:55:51 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB 14:55:51 1617e25568b2 Extracting [==================================================>] 480.9kB/480.9kB 14:55:51 e73cb4a42719 Extracting [==============================================> ] 101.4MB/109.1MB 14:55:51 aecd4cb03450 Pull complete 14:55:51 55f2b468da67 Extracting [====================================> ] 188.3MB/257.9MB 14:55:51 eabd8714fec9 Extracting [======> ] 45.12MB/375MB 14:55:51 13fa68ca8757 Extracting [==================================================>] 27.77kB/27.77kB 14:55:51 13fa68ca8757 Extracting [==================================================>] 27.77kB/27.77kB 14:55:51 e73cb4a42719 Extracting [===============================================> ] 103.6MB/109.1MB 14:55:51 55f2b468da67 Extracting [=====================================> ] 192.2MB/257.9MB 14:55:51 eabd8714fec9 Extracting [=======> ] 53.48MB/375MB 14:55:52 e73cb4a42719 Extracting [================================================> ] 105.3MB/109.1MB 14:55:52 eabd8714fec9 Extracting [========> ] 62.95MB/375MB 14:55:52 55f2b468da67 Extracting [=====================================> ] 195MB/257.9MB 14:55:52 eabd8714fec9 Extracting [=========> ] 71.3MB/375MB 14:55:52 55f2b468da67 Extracting [======================================> ] 196.1MB/257.9MB 14:55:52 eabd8714fec9 Extracting [==========> ] 77.99MB/375MB 14:55:52 eabd8714fec9 Extracting [==========> ] 81.89MB/375MB 14:55:52 eabd8714fec9 Extracting [===========> ] 83MB/375MB 14:55:52 e73cb4a42719 Extracting 
[=================================================> ] 107.5MB/109.1MB 14:55:52 55f2b468da67 Extracting [======================================> ] 196.6MB/257.9MB 14:55:52 eabd8714fec9 Extracting [===========> ] 86.9MB/375MB 14:55:52 eabd8714fec9 Extracting [============> ] 93.03MB/375MB 14:55:52 55f2b468da67 Extracting [======================================> ] 197.8MB/257.9MB 14:55:52 e73cb4a42719 Extracting [=================================================> ] 108.6MB/109.1MB 14:55:52 13fa68ca8757 Pull complete 14:55:52 e73cb4a42719 Extracting [==================================================>] 109.1MB/109.1MB 14:55:52 1617e25568b2 Pull complete 14:55:53 eabd8714fec9 Extracting [=============> ] 98.6MB/375MB 14:55:53 55f2b468da67 Extracting [======================================> ] 200MB/257.9MB 14:55:53 eabd8714fec9 Extracting [=============> ] 104.2MB/375MB 14:55:53 f836d47fdc4d Extracting [> ] 557.1kB/107.3MB 14:55:53 55f2b468da67 Extracting [=======================================> ] 201.7MB/257.9MB 14:55:53 eabd8714fec9 Extracting [==============> ] 108.6MB/375MB 14:55:53 f836d47fdc4d Extracting [=> ] 3.899MB/107.3MB 14:55:53 55f2b468da67 Extracting [=======================================> ] 203.9MB/257.9MB 14:55:53 f836d47fdc4d Extracting [===> ] 7.242MB/107.3MB 14:55:53 eabd8714fec9 Extracting [===============> ] 113.1MB/375MB 14:55:53 55f2b468da67 Extracting [=======================================> ] 206.1MB/257.9MB 14:55:53 eabd8714fec9 Extracting [===============> ] 117.5MB/375MB 14:55:53 f836d47fdc4d Extracting [====> ] 10.58MB/107.3MB 14:55:53 55f2b468da67 Extracting [========================================> ] 207.8MB/257.9MB 14:55:53 eabd8714fec9 Extracting [================> ] 121.4MB/375MB 14:55:53 f836d47fdc4d Extracting [=======> ] 15.04MB/107.3MB 14:55:53 55f2b468da67 Extracting [========================================> ] 211.1MB/257.9MB 14:55:53 e73cb4a42719 Pull complete 14:55:53 eabd8714fec9 Extracting [================> ] 124.8MB/375MB 14:55:53 f836d47fdc4d Extracting [=======> ] 16.71MB/107.3MB 14:55:53 55f2b468da67 Extracting [=========================================> ] 212.2MB/257.9MB 14:55:53 eabd8714fec9 Extracting [=================> ] 128.7MB/375MB 14:55:53 55f2b468da67 Extracting [=========================================> ] 215.6MB/257.9MB 14:55:53 f836d47fdc4d Extracting [========> ] 17.83MB/107.3MB 14:55:53 eabd8714fec9 Extracting [=================> ] 132.6MB/375MB 14:55:53 55f2b468da67 Extracting [==========================================> ] 220.6MB/257.9MB 14:55:53 f836d47fdc4d Extracting [==========> ] 21.73MB/107.3MB 14:55:54 eabd8714fec9 Extracting [==================> ] 138.1MB/375MB 14:55:54 55f2b468da67 Extracting [===========================================> ] 223.9MB/257.9MB 14:55:54 f836d47fdc4d Extracting [============> ] 27.85MB/107.3MB 14:55:54 eabd8714fec9 Extracting [===================> ] 142.6MB/375MB 14:55:54 f836d47fdc4d Extracting [===============> ] 33.98MB/107.3MB 14:55:54 55f2b468da67 Extracting [===========================================> ] 226.7MB/257.9MB 14:55:54 eabd8714fec9 Extracting [===================> ] 147.1MB/375MB 14:55:54 f836d47fdc4d Extracting [=================> ] 37.88MB/107.3MB 14:55:54 55f2b468da67 Extracting [============================================> ] 228.4MB/257.9MB 14:55:54 eabd8714fec9 Extracting [====================> ] 152.1MB/375MB 14:55:54 f836d47fdc4d Extracting [===================> ] 41.78MB/107.3MB 14:55:54 55f2b468da67 Extracting [============================================> ] 
230.6MB/257.9MB 14:55:54 eabd8714fec9 Extracting [====================> ] 156MB/375MB 14:55:54 6ac0e4adf315 Downloading [===========================================> ] 53.53MB/62.07MB 14:55:54 f3b09c502777 Downloading [===================================> ] 40.01MB/56.52MB 14:55:54 f836d47fdc4d Extracting [======================> ] 47.35MB/107.3MB 14:55:54 44986281b8b9 Downloading [=====================================> ] 3.011kB/4.022kB 14:55:54 44986281b8b9 Downloading [==================================================>] 4.022kB/4.022kB 14:55:54 44986281b8b9 Verifying Checksum 14:55:54 44986281b8b9 Download complete 14:55:54 55f2b468da67 Extracting [=============================================> ] 232.3MB/257.9MB 14:55:54 bf70c5107ab5 Download complete 14:55:54 6ac0e4adf315 Download complete 14:55:54 7221d93db8a9 Downloading [==================================================>] 100B/100B 14:55:54 7221d93db8a9 Verifying Checksum 14:55:54 7221d93db8a9 Download complete 14:55:54 1ccde423731d Downloading [==> ] 3.01kB/61.44kB 14:55:54 eabd8714fec9 Extracting [=====================> ] 159.9MB/375MB 14:55:54 1ccde423731d Verifying Checksum 14:55:54 1ccde423731d Download complete 14:55:54 7df673c7455d Downloading [==================================================>] 694B/694B 14:55:54 7df673c7455d Verifying Checksum 14:55:54 7df673c7455d Download complete 14:55:54 f3b09c502777 Downloading [============================================> ] 49.74MB/56.52MB 14:55:54 a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 14:55:54 55f2b468da67 Extracting [=============================================> ] 232.8MB/257.9MB 14:55:54 a83b68436f09 Extracting [==================================================>] 9.919kB/9.919kB 14:55:54 f836d47fdc4d Extracting [=======================> ] 49.58MB/107.3MB 14:55:54 6ac0e4adf315 Extracting [> ] 557.1kB/62.07MB 14:55:54 eabd8714fec9 Extracting [=====================> ] 161MB/375MB 14:55:54 f3b09c502777 Verifying Checksum 14:55:54 f836d47fdc4d Extracting [========================> ] 53.48MB/107.3MB 14:55:54 55f2b468da67 Extracting [=============================================> ] 235.6MB/257.9MB 14:55:54 6ac0e4adf315 Extracting [===> ] 3.899MB/62.07MB 14:55:54 eabd8714fec9 Extracting [=====================> ] 163.8MB/375MB 14:55:54 f836d47fdc4d Extracting [===========================> ] 57.93MB/107.3MB 14:55:54 55f2b468da67 Extracting [==============================================> ] 238.4MB/257.9MB 14:55:54 6ac0e4adf315 Extracting [====> ] 6.128MB/62.07MB 14:55:54 eabd8714fec9 Extracting [======================> ] 167.7MB/375MB 14:55:54 f836d47fdc4d Extracting [=============================> ] 63.5MB/107.3MB 14:55:54 6ac0e4adf315 Extracting [=======> ] 9.47MB/62.07MB 14:55:54 eabd8714fec9 Extracting [=======================> ] 174.4MB/375MB 14:55:55 f836d47fdc4d Extracting [===============================> ] 66.85MB/107.3MB 14:55:55 eabd8714fec9 Extracting [========================> ] 182.2MB/375MB 14:55:55 6ac0e4adf315 Extracting [=========> ] 12.26MB/62.07MB 14:55:55 f836d47fdc4d Extracting [===============================> ] 67.4MB/107.3MB 14:55:55 55f2b468da67 Extracting [===============================================> ] 244.5MB/257.9MB 14:55:55 eabd8714fec9 Extracting [=========================> ] 188.8MB/375MB 14:55:55 6ac0e4adf315 Extracting [============> ] 15.6MB/62.07MB 14:55:55 f836d47fdc4d Extracting [================================> ] 70.75MB/107.3MB 14:55:55 55f2b468da67 Extracting 
[================================================> ] 252.3MB/257.9MB 14:55:55 eabd8714fec9 Extracting [==========================> ] 197.8MB/375MB 14:55:55 6ac0e4adf315 Extracting [=================> ] 22.28MB/62.07MB 14:55:55 f836d47fdc4d Extracting [==================================> ] 74.09MB/107.3MB 14:55:55 55f2b468da67 Extracting [=================================================> ] 255.7MB/257.9MB 14:55:55 eabd8714fec9 Extracting [===========================> ] 204.4MB/375MB 14:55:55 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 14:55:55 55f2b468da67 Extracting [==================================================>] 257.9MB/257.9MB 14:55:55 6ac0e4adf315 Extracting [===================> ] 23.95MB/62.07MB 14:55:55 f836d47fdc4d Extracting [===================================> ] 76.87MB/107.3MB 14:55:55 eabd8714fec9 Extracting [============================> ] 216.7MB/375MB 14:55:55 f836d47fdc4d Extracting [=====================================> ] 80.22MB/107.3MB 14:55:55 6ac0e4adf315 Extracting [=====================> ] 26.74MB/62.07MB 14:55:56 eabd8714fec9 Extracting [=============================> ] 220.6MB/375MB 14:55:56 f836d47fdc4d Extracting [=======================================> ] 85.23MB/107.3MB 14:55:56 6ac0e4adf315 Extracting [========================> ] 30.64MB/62.07MB 14:55:56 eabd8714fec9 Extracting [==============================> ] 225.6MB/375MB 14:55:56 f836d47fdc4d Extracting [===========================================> ] 93.59MB/107.3MB 14:55:56 6ac0e4adf315 Extracting [============================> ] 35.09MB/62.07MB 14:55:56 eabd8714fec9 Extracting [==============================> ] 232.3MB/375MB 14:55:56 f836d47fdc4d Extracting [=============================================> ] 98.6MB/107.3MB 14:55:56 6ac0e4adf315 Extracting [========================================> ] 50.14MB/62.07MB 14:55:56 eabd8714fec9 Extracting [===============================> ] 238.4MB/375MB 14:55:56 6ac0e4adf315 Extracting [================================================> ] 60.16MB/62.07MB 14:55:56 f836d47fdc4d Extracting [================================================> ] 103.1MB/107.3MB 14:55:56 eabd8714fec9 Extracting [================================> ] 244MB/375MB 14:55:56 6ac0e4adf315 Extracting [==================================================>] 62.07MB/62.07MB 14:55:56 f836d47fdc4d Extracting [================================================> ] 104.2MB/107.3MB 14:55:56 eabd8714fec9 Extracting [================================> ] 245.7MB/375MB 14:55:57 f836d47fdc4d Extracting [=================================================> ] 105.8MB/107.3MB 14:55:57 eabd8714fec9 Extracting [=================================> ] 251.2MB/375MB 14:55:57 f836d47fdc4d Extracting [==================================================>] 107.3MB/107.3MB 14:55:57 eabd8714fec9 Extracting [==================================> ] 255.7MB/375MB 14:55:57 eabd8714fec9 Extracting [===================================> ] 263.5MB/375MB 14:55:57 eabd8714fec9 Extracting [===================================> ] 268.5MB/375MB 14:55:57 eabd8714fec9 Extracting [===================================> ] 269.6MB/375MB 14:55:57 eabd8714fec9 Extracting [====================================> ] 271.3MB/375MB 14:55:58 eabd8714fec9 Extracting [====================================> ] 273MB/375MB 14:55:58 a83b68436f09 Pull complete 14:55:58 eabd8714fec9 Extracting [====================================> ] 275.2MB/375MB 14:55:58 eabd8714fec9 Extracting 
[=====================================> ] 280.8MB/375MB 14:55:58 eabd8714fec9 Extracting [======================================> ] 288MB/375MB 14:55:58 eabd8714fec9 Extracting [=======================================> ] 293MB/375MB 14:55:58 eabd8714fec9 Extracting [=======================================> ] 295.8MB/375MB 14:55:58 55f2b468da67 Pull complete 14:55:59 eabd8714fec9 Extracting [=======================================> ] 296.4MB/375MB 14:55:59 eabd8714fec9 Extracting [=======================================> ] 299.7MB/375MB 14:55:59 eabd8714fec9 Extracting [========================================> ] 303MB/375MB 14:55:59 eabd8714fec9 Extracting [========================================> ] 305.8MB/375MB 14:55:59 eabd8714fec9 Extracting [=========================================> ] 307.5MB/375MB 14:55:59 6ac0e4adf315 Pull complete 14:55:59 f836d47fdc4d Pull complete 14:55:59 787d6bee9571 Extracting [==================================================>] 127B/127B 14:55:59 787d6bee9571 Extracting [==================================================>] 127B/127B 14:56:01 eabd8714fec9 Extracting [=========================================> ] 310.8MB/375MB 14:56:01 eabd8714fec9 Extracting [=========================================> ] 312MB/375MB 14:56:01 eabd8714fec9 Extracting [=========================================> ] 312.5MB/375MB 14:56:01 f3b09c502777 Extracting [> ] 557.1kB/56.52MB 14:56:01 eabd8714fec9 Extracting [=========================================> ] 314.7MB/375MB 14:56:01 f3b09c502777 Extracting [=====> ] 6.128MB/56.52MB 14:56:01 f3b09c502777 Extracting [========> ] 10.03MB/56.52MB 14:56:01 eabd8714fec9 Extracting [==========================================> ] 319.8MB/375MB 14:56:01 f3b09c502777 Extracting [=========> ] 11.14MB/56.52MB 14:56:01 eabd8714fec9 Extracting [===========================================> ] 323.6MB/375MB 14:56:01 f3b09c502777 Extracting [============> ] 14.48MB/56.52MB 14:56:02 787d6bee9571 Pull complete 14:56:02 82bfc142787e Extracting [> ] 98.3kB/8.613MB 14:56:02 eabd8714fec9 Extracting [===========================================> ] 327MB/375MB 14:56:02 f3b09c502777 Extracting [=============> ] 15.6MB/56.52MB 14:56:02 8b5292c940e1 Extracting [> ] 557.1kB/63.48MB 14:56:02 82bfc142787e Extracting [==> ] 491.5kB/8.613MB 14:56:02 eabd8714fec9 Extracting [===========================================> ] 328.7MB/375MB 14:56:02 f3b09c502777 Extracting [================> ] 18.94MB/56.52MB 14:56:02 82bfc142787e Extracting [==================> ] 3.244MB/8.613MB 14:56:02 8b5292c940e1 Extracting [> ] 1.114MB/63.48MB 14:56:02 f3b09c502777 Extracting [===================> ] 21.73MB/56.52MB 14:56:02 eabd8714fec9 Extracting [===========================================> ] 329.2MB/375MB 14:56:02 13ff0988aaea Extracting [==================================================>] 167B/167B 14:56:02 82bfc142787e Extracting [=============================================> ] 7.864MB/8.613MB 14:56:02 13ff0988aaea Extracting [==================================================>] 167B/167B 14:56:02 f3b09c502777 Extracting [=======================> ] 26.18MB/56.52MB 14:56:02 82bfc142787e Extracting [==================================================>] 8.613MB/8.613MB 14:56:02 8b5292c940e1 Extracting [=> ] 1.671MB/63.48MB 14:56:02 eabd8714fec9 Extracting [============================================> ] 331.4MB/375MB 14:56:02 f3b09c502777 Extracting [=========================> ] 28.41MB/56.52MB 14:56:02 8b5292c940e1 Extracting [=> ] 2.228MB/63.48MB 14:56:02 eabd8714fec9 Extracting 
[============================================> ] 332.6MB/375MB 14:56:02 f3b09c502777 Extracting [==================================> ] 39.55MB/56.52MB 14:56:02 8b5292c940e1 Extracting [==> ] 2.785MB/63.48MB 14:56:02 eabd8714fec9 Extracting [============================================> ] 335.3MB/375MB 14:56:02 f3b09c502777 Extracting [===========================================> ] 49.58MB/56.52MB 14:56:02 8b5292c940e1 Extracting [===> ] 4.456MB/63.48MB 14:56:02 eabd8714fec9 Extracting [=============================================> ] 339.8MB/375MB 14:56:02 f3b09c502777 Extracting [=================================================> ] 55.71MB/56.52MB 14:56:03 8b5292c940e1 Extracting [===> ] 5.014MB/63.48MB 14:56:03 eabd8714fec9 Extracting [=============================================> ] 340.9MB/375MB 14:56:03 f3b09c502777 Extracting [==================================================>] 56.52MB/56.52MB 14:56:03 13ff0988aaea Pull complete 14:56:03 82bfc142787e Pull complete 14:56:03 8b5292c940e1 Extracting [======> ] 7.799MB/63.48MB 14:56:03 8b5292c940e1 Extracting [=======> ] 9.47MB/63.48MB 14:56:03 eabd8714fec9 Extracting [=============================================> ] 342MB/375MB 14:56:03 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 14:56:03 4b82842ab819 Extracting [==================================================>] 5.415kB/5.415kB 14:56:03 8b5292c940e1 Extracting [========> ] 10.58MB/63.48MB 14:56:03 8b5292c940e1 Extracting [========> ] 11.14MB/63.48MB 14:56:03 f3b09c502777 Pull complete 14:56:03 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 14:56:03 46baca71a4ef Extracting [==================================================>] 18.11kB/18.11kB 14:56:03 408012a7b118 Extracting [==================================================>] 637B/637B 14:56:03 408012a7b118 Extracting [==================================================>] 637B/637B 14:56:03 4b82842ab819 Pull complete 14:56:04 7e568a0dc8fb Extracting [==================================================>] 184B/184B 14:56:04 7e568a0dc8fb Extracting [==================================================>] 184B/184B 14:56:04 46baca71a4ef Pull complete 14:56:04 8b5292c940e1 Extracting [==========> ] 12.81MB/63.48MB 14:56:04 eabd8714fec9 Extracting [=============================================> ] 342.6MB/375MB 14:56:04 b0e0ef7895f4 Extracting [> ] 393.2kB/37.01MB 14:56:04 408012a7b118 Pull complete 14:56:04 8b5292c940e1 Extracting [===========> ] 15.04MB/63.48MB 14:56:04 b0e0ef7895f4 Extracting [======> ] 5.112MB/37.01MB 14:56:04 eabd8714fec9 Extracting [=============================================> ] 343.1MB/375MB 14:56:04 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 14:56:04 44986281b8b9 Extracting [==================================================>] 4.022kB/4.022kB 14:56:04 b0e0ef7895f4 Extracting [===================> ] 14.16MB/37.01MB 14:56:04 8b5292c940e1 Extracting [=============> ] 16.71MB/63.48MB 14:56:04 eabd8714fec9 Extracting [==============================================> ] 345.9MB/375MB 14:56:04 8b5292c940e1 Extracting [==============> ] 17.83MB/63.48MB 14:56:04 b0e0ef7895f4 Extracting [===========================> ] 20.05MB/37.01MB 14:56:04 7e568a0dc8fb Pull complete 14:56:04 8b5292c940e1 Extracting [================> ] 20.61MB/63.48MB 14:56:04 b0e0ef7895f4 Extracting [================================> ] 23.99MB/37.01MB 14:56:04 eabd8714fec9 Extracting 
[==============================================> ] 349.8MB/375MB 14:56:05 b0e0ef7895f4 Extracting [=================================> ] 25.17MB/37.01MB 14:56:05 8b5292c940e1 Extracting [================> ] 21.17MB/63.48MB 14:56:05 eabd8714fec9 Extracting [==============================================> ] 350.4MB/375MB 14:56:05 b0e0ef7895f4 Extracting [==================================================>] 37.01MB/37.01MB 14:56:05 8b5292c940e1 Extracting [==================> ] 23.4MB/63.48MB 14:56:05 eabd8714fec9 Extracting [===============================================> ] 354.8MB/375MB 14:56:05 8b5292c940e1 Extracting [=====================> ] 26.74MB/63.48MB 14:56:05 eabd8714fec9 Extracting [===============================================> ] 357.1MB/375MB 14:56:05 8b5292c940e1 Extracting [=======================> ] 29.52MB/63.48MB 14:56:05 eabd8714fec9 Extracting [================================================> ] 363.2MB/375MB 14:56:05 8b5292c940e1 Extracting [=========================> ] 32.31MB/63.48MB 14:56:05 eabd8714fec9 Extracting [=================================================> ] 368.2MB/375MB 14:56:05 8b5292c940e1 Extracting [============================> ] 36.21MB/63.48MB 14:56:05 eabd8714fec9 Extracting [=================================================> ] 373.8MB/375MB 14:56:05 eabd8714fec9 Extracting [==================================================>] 375MB/375MB 14:56:05 8b5292c940e1 Extracting [==============================> ] 38.99MB/63.48MB 14:56:06 44986281b8b9 Pull complete 14:56:06 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 14:56:06 bf70c5107ab5 Extracting [==================================================>] 1.44kB/1.44kB 14:56:06 b0e0ef7895f4 Pull complete 14:56:06 postgres Pulled 14:56:06 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 14:56:06 c0c90eeb8aca Extracting [==================================================>] 1.105kB/1.105kB 14:56:06 8b5292c940e1 Extracting [================================> ] 41.78MB/63.48MB 14:56:06 c0c90eeb8aca Pull complete 14:56:06 eabd8714fec9 Pull complete 14:56:06 bf70c5107ab5 Pull complete 14:56:06 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 14:56:06 5cfb27c10ea5 Extracting [==================================================>] 852B/852B 14:56:06 1ccde423731d Extracting [==========================> ] 32.77kB/61.44kB 14:56:06 1ccde423731d Extracting [==================================================>] 61.44kB/61.44kB 14:56:06 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 14:56:06 45fd2fec8a19 Extracting [==================================================>] 1.103kB/1.103kB 14:56:06 8b5292c940e1 Extracting [===================================> ] 45.68MB/63.48MB 14:56:06 1ccde423731d Pull complete 14:56:06 45fd2fec8a19 Pull complete 14:56:06 5cfb27c10ea5 Pull complete 14:56:06 8b5292c940e1 Extracting [======================================> ] 49.02MB/63.48MB 14:56:06 7221d93db8a9 Extracting [==================================================>] 100B/100B 14:56:06 7221d93db8a9 Extracting [==================================================>] 100B/100B 14:56:06 8f10199ed94b Extracting [> ] 98.3kB/8.768MB 14:56:06 40a5eed61bb0 Extracting [==================================================>] 98B/98B 14:56:06 40a5eed61bb0 Extracting [==================================================>] 98B/98B 14:56:06 8f10199ed94b Extracting [=============> ] 
2.359MB/8.768MB 14:56:06 8b5292c940e1 Extracting [========================================> ] 51.81MB/63.48MB 14:56:06 7221d93db8a9 Pull complete 14:56:06 40a5eed61bb0 Pull complete 14:56:06 7df673c7455d Extracting [==================================================>] 694B/694B 14:56:06 7df673c7455d Extracting [==================================================>] 694B/694B 14:56:06 e040ea11fa10 Extracting [==================================================>] 173B/173B 14:56:06 e040ea11fa10 Extracting [==================================================>] 173B/173B 14:56:06 8f10199ed94b Extracting [==================================================>] 8.768MB/8.768MB 14:56:06 8b5292c940e1 Extracting [===========================================> ] 55.15MB/63.48MB 14:56:06 8f10199ed94b Pull complete 14:56:06 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 14:56:06 f963a77d2726 Extracting [==================================================>] 21.44kB/21.44kB 14:56:06 e040ea11fa10 Pull complete 14:56:06 7df673c7455d Pull complete 14:56:06 8b5292c940e1 Extracting [==============================================> ] 59.05MB/63.48MB 14:56:06 prometheus Pulled 14:56:06 f963a77d2726 Pull complete 14:56:06 09d5a3f70313 Extracting [> ] 557.1kB/109.2MB 14:56:06 8b5292c940e1 Extracting [================================================> ] 61.28MB/63.48MB 14:56:06 f3a82e9f1761 Extracting [> ] 458.8kB/44.41MB 14:56:06 09d5a3f70313 Extracting [====> ] 10.58MB/109.2MB 14:56:06 8b5292c940e1 Extracting [==================================================>] 63.48MB/63.48MB 14:56:06 8b5292c940e1 Extracting [==================================================>] 63.48MB/63.48MB 14:56:06 f3a82e9f1761 Extracting [=============> ] 11.93MB/44.41MB 14:56:06 09d5a3f70313 Extracting [=========> ] 21.17MB/109.2MB 14:56:07 f3a82e9f1761 Extracting [========================> ] 22.02MB/44.41MB 14:56:07 8b5292c940e1 Pull complete 14:56:07 454a4350d439 Extracting [==================================================>] 11.93kB/11.93kB 14:56:07 454a4350d439 Extracting [==================================================>] 11.93kB/11.93kB 14:56:07 09d5a3f70313 Extracting [===============> ] 33.42MB/109.2MB 14:56:07 f3a82e9f1761 Extracting [=========================================> ] 36.7MB/44.41MB 14:56:07 454a4350d439 Pull complete 14:56:07 9a8c18aee5ea Extracting [==================================================>] 1.227kB/1.227kB 14:56:07 9a8c18aee5ea Extracting [==================================================>] 1.227kB/1.227kB 14:56:07 09d5a3f70313 Extracting [======================> ] 48.46MB/109.2MB 14:56:07 f3a82e9f1761 Extracting [==================================================>] 44.41MB/44.41MB 14:56:07 f3a82e9f1761 Pull complete 14:56:07 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 14:56:07 79161a3f5362 Extracting [==================================================>] 4.656kB/4.656kB 14:56:07 09d5a3f70313 Extracting [===========================> ] 60.72MB/109.2MB 14:56:07 9a8c18aee5ea Pull complete 14:56:07 grafana Pulled 14:56:07 79161a3f5362 Pull complete 14:56:07 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 14:56:07 9c266ba63f51 Extracting [==================================================>] 1.105kB/1.105kB 14:56:07 09d5a3f70313 Extracting [===================================> ] 77.43MB/109.2MB 14:56:07 09d5a3f70313 Extracting [===========================================> ] 
94.7MB/109.2MB 14:56:07 9c266ba63f51 Pull complete 14:56:07 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 14:56:07 2e8a7df9c2ee Extracting [==================================================>] 851B/851B 14:56:07 09d5a3f70313 Extracting [================================================> ] 105.3MB/109.2MB 14:56:07 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 14:56:07 09d5a3f70313 Extracting [==================================================>] 109.2MB/109.2MB 14:56:13 09d5a3f70313 Pull complete 14:56:13 2e8a7df9c2ee Pull complete 14:56:14 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 14:56:14 356f5c2c843b Extracting [==================================================>] 3.623kB/3.623kB 14:56:14 10f05dd8b1db Extracting [==================================================>] 98B/98B 14:56:14 10f05dd8b1db Extracting [==================================================>] 98B/98B 14:56:14 356f5c2c843b Pull complete 14:56:14 10f05dd8b1db Pull complete 14:56:14 41dac8b43ba6 Extracting [==================================================>] 171B/171B 14:56:14 41dac8b43ba6 Extracting [==================================================>] 171B/171B 14:56:14 kafka Pulled 14:56:14 41dac8b43ba6 Pull complete 14:56:14 71a9f6a9ab4d Extracting [=======> ] 32.77kB/230.6kB 14:56:14 71a9f6a9ab4d Extracting [==================================================>] 230.6kB/230.6kB 14:56:14 71a9f6a9ab4d Pull complete 14:56:14 da3ed5db7103 Extracting [> ] 557.1kB/127.4MB 14:56:14 da3ed5db7103 Extracting [=====> ] 12.81MB/127.4MB 14:56:14 da3ed5db7103 Extracting [===========> ] 28.41MB/127.4MB 14:56:14 da3ed5db7103 Extracting [=================> ] 44.01MB/127.4MB 14:56:14 da3ed5db7103 Extracting [========================> ] 62.95MB/127.4MB 14:56:14 da3ed5db7103 Extracting [================================> ] 82.44MB/127.4MB 14:56:15 da3ed5db7103 Extracting [=======================================> ] 101.4MB/127.4MB 14:56:15 da3ed5db7103 Extracting [==============================================> ] 119.2MB/127.4MB 14:56:15 da3ed5db7103 Extracting [================================================> ] 124.8MB/127.4MB 14:56:15 da3ed5db7103 Extracting [==================================================>] 127.4MB/127.4MB 14:56:15 da3ed5db7103 Pull complete 14:56:15 c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB 14:56:15 c955f6e31a04 Extracting [==================================================>] 3.446kB/3.446kB 14:56:15 c955f6e31a04 Pull complete 14:56:15 zookeeper Pulled 14:56:15 Network compose_default Creating 14:56:15 Network compose_default Created 14:56:15 Container prometheus Creating 14:56:15 Container zookeeper Creating 14:56:15 Container postgres Creating 14:56:42 Container zookeeper Created 14:56:42 Container kafka Creating 14:56:42 Container prometheus Created 14:56:42 Container grafana Creating 14:56:42 Container postgres Created 14:56:42 Container policy-db-migrator Creating 14:56:42 Container policy-db-migrator Created 14:56:42 Container policy-api Creating 14:56:42 Container grafana Created 14:56:42 Container kafka Created 14:56:42 Container policy-api Created 14:56:42 Container policy-pap Creating 14:56:42 Container policy-pap Created 14:56:42 Container policy-opa-pdp Creating 14:56:42 Container policy-opa-pdp Created 14:56:42 Container zookeeper Starting 14:56:42 Container postgres Starting 14:56:42 Container prometheus Starting 
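(The creation order above tracks the compose dependency graph: zookeeper before kafka, postgres before policy-db-migrator, and policy-api before policy-pap before policy-opa-pdp. A minimal sketch of driving the same stack by hand follows; the compose file name and the explicit three-phase ordering are assumptions for illustration, since with depends_on declared a plain `docker compose up -d` produces this order on its own.)

    # Sketch only: start the CSIT stack in the dependency order seen above.
    # File name compose.yaml is assumed, not taken from this build's scripts.
    docker compose -f compose.yaml up -d prometheus zookeeper postgres
    docker compose -f compose.yaml up -d grafana kafka policy-db-migrator
    docker compose -f compose.yaml up -d policy-api policy-pap policy-opa-pdp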
14:56:44 Container prometheus Started
14:56:44 Container grafana Starting
14:56:45 Container grafana Started
14:56:46 Container postgres Started
14:56:46 Container policy-db-migrator Starting
14:56:48 Container zookeeper Started
14:56:48 Container kafka Starting
14:56:48 Container policy-db-migrator Started
14:56:48 Container policy-api Starting
14:56:49 Container kafka Started
14:56:51 Container policy-api Started
14:56:51 Container policy-pap Starting
14:56:53 Container policy-pap Started
14:56:53 Container policy-opa-pdp Starting
14:56:55 Container policy-opa-pdp Started
14:56:55 Prometheus server: http://localhost:30259
14:56:55 Grafana server: http://localhost:30269
14:56:55 Waiting 3 minutes for OPA-PDP to start...
14:59:55 Checking if REST port 30003 is open on localhost ...
14:59:55 IMAGE                                                      NAMES           STATUS
14:59:55 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp  Up 3 minutes
14:59:55 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap      Up 3 minutes
14:59:55 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api      Up 3 minutes
14:59:55 nexus3.onap.org:10001/grafana/grafana:latest               grafana         Up 3 minutes
14:59:55 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka           Up 3 minutes
14:59:55 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper       Up 3 minutes
14:59:55 nexus3.onap.org:10001/library/postgres:16.4                postgres        Up 3 minutes
14:59:55 nexus3.onap.org:10001/prom/prometheus:latest               prometheus      Up 3 minutes
14:59:55 Checking if REST port 30012 is open on localhost ...
14:59:55 IMAGE                                                      NAMES           STATUS
14:59:55 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp  Up 3 minutes
14:59:55 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap      Up 3 minutes
14:59:55 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api      Up 3 minutes
14:59:55 nexus3.onap.org:10001/grafana/grafana:latest               grafana         Up 3 minutes
14:59:55 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka           Up 3 minutes
14:59:55 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper       Up 3 minutes
14:59:55 nexus3.onap.org:10001/library/postgres:16.4                postgres        Up 3 minutes
14:59:55 nexus3.onap.org:10001/prom/prometheus:latest               prometheus      Up 3 minutes
14:59:55 Cloning into '/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/csit/resources/tests/models'...
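(A port probe like the two checks above can be sketched as a small bash helper. The host, ports, and 60-second budget here are illustrative rather than the values hard-coded in the CSIT scripts, and `nc` is assumed to be installed on the agent.)

    # Sketch: wait until a TCP port accepts connections, or give up after 60s.
    wait_for_port() {
      local host="$1" port="$2" deadline=$((SECONDS + 60))
      until nc -z "$host" "$port" 2>/dev/null; do
        [ "$SECONDS" -ge "$deadline" ] && { echo "timeout on $host:$port" >&2; return 1; }
        sleep 2
      done
      echo "$host:$port is open"
    }
    wait_for_port localhost 30003   # OPA-PDP REST port checked above
    wait_for_port localhost 30012   # second REST port checked above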
14:59:56 Building robot framework docker image
15:00:37 sha256:ab89bce49b8fdad315df4d72d0a70d1e88b213c4bbf89b533da71e22302f6252
15:00:41 top - 15:00:41 up 7 min, 0 users, load average: 0.94, 1.41, 0.77
15:00:41 Tasks: 219 total, 1 running, 148 sleeping, 0 stopped, 0 zombie
15:00:41 %Cpu(s): 9.4 us, 2.2 sy, 0.0 ni, 82.4 id, 5.9 wa, 0.0 hi, 0.1 si, 0.1 st
15:00:41         total   used   free   shared   buff/cache   available
15:00:41 Mem:      31G   2.5G    21G      28M         7.3G         28G
15:00:41 Swap:    1.0G     0B   1.0G
15:00:41 IMAGE                                                      NAMES           STATUS
15:00:41 nexus3.onap.org:10001/onap/policy-opa-pdp:1.0.8-SNAPSHOT   policy-opa-pdp  Up 3 minutes
15:00:41 nexus3.onap.org:10001/onap/policy-pap:4.2.1-SNAPSHOT       policy-pap      Up 3 minutes
15:00:41 nexus3.onap.org:10001/onap/policy-api:4.2.1-SNAPSHOT       policy-api      Up 3 minutes
15:00:41 nexus3.onap.org:10001/grafana/grafana:latest               grafana         Up 3 minutes
15:00:41 nexus3.onap.org:10001/confluentinc/cp-kafka:7.4.9          kafka           Up 3 minutes
15:00:41 nexus3.onap.org:10001/confluentinc/cp-zookeeper:latest     zookeeper       Up 3 minutes
15:00:41 nexus3.onap.org:10001/library/postgres:16.4                postgres        Up 3 minutes
15:00:41 nexus3.onap.org:10001/prom/prometheus:latest               prometheus      Up 3 minutes
15:00:43 CONTAINER ID   NAME             CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O     PIDS
15:00:43 1f04a4f38824   policy-opa-pdp   0.17%   11.59MiB / 31.41GiB   0.04%   74.6kB / 70.6kB   0B / 0B       21
15:00:43 ce85b575396b   policy-pap       0.64%   582.2MiB / 31.41GiB   1.81%   1.7MB / 993kB     0B / 139MB    70
15:00:43 42be2d504923   policy-api       0.12%   534.1MiB / 31.41GiB   1.66%   1.15MB / 1.09MB   0B / 0B       59
15:00:43 5db484a4cb88   grafana          0.20%   110.6MiB / 31.41GiB   0.34%   19.1MB / 177kB    0B / 30.4MB   21
15:00:43 a4ba70d5583c   kafka            2.76%   384.9MiB / 31.41GiB   1.20%   292kB / 279kB     0B / 692kB    83
15:00:43 9d9cce903718   zookeeper        0.08%   85.63MiB / 31.41GiB   0.27%   64.3kB / 54.5kB   0B / 356kB    61
15:00:43 5ebc6de93501   postgres         0.00%   87.76MiB / 31.41GiB   0.27%   2.33MB / 3.25MB   0B / 160MB    26
15:00:43 b3477b242c9a   prometheus       0.20%   22.02MiB / 31.41GiB   0.07%   236kB / 10.1kB    229kB / 0B    13
15:00:43 Container policy-csit Creating
15:00:43 Container policy-csit Created
15:00:43 Attaching to policy-csit
15:00:44 policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot
15:00:44 policy-csit | Run Robot test
15:00:44 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies
15:00:44 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates
15:00:44 policy-csit | -v POLICY_API_IP:policy-api:6969
15:00:44 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969
15:00:44 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969
15:00:44 policy-csit | -v POLICY_PAP_IP:policy-pap:6969
15:00:44 policy-csit | -v APEX_IP:policy-apex-pdp:6969
15:00:44 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324
15:00:44 policy-csit | -v KAFKA_IP:kafka:9092
15:00:44 policy-csit | -v PROMETHEUS_IP:prometheus:9090
15:00:44 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969
15:00:44 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282
15:00:44 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696
15:00:44 policy-csit | -v DROOLS_IP:policy-drools-apps:6969
15:00:44 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696
15:00:44 policy-csit | -v TEMP_FOLDER:/tmp/distribution
15:00:44 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969
15:00:44 policy-csit | -v TEST_ENV:docker
15:00:44 policy-csit | -v JAEGER_IP:jaeger:16686
15:00:44 policy-csit | Starting Robot test suites ...
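(The ROBOT_VARIABLES above are ordinary Robot Framework -v name:value overrides. A rough sketch of the invocation they imply follows; the --outputdir value matches the result paths printed further down, while the exact entrypoint of the policy-csit image is an assumption, not the literal script.)

    # Sketch of the Robot invocation implied by the variables above.
    robot --outputdir /tmp/results \
          -v POLICY_OPA_IP:policy-opa-pdp:8282 \
          -v PROMETHEUS_IP:prometheus:9090 \
          -v TEST_ENV:docker \
          opa-pdp-test.robot opa-pdp-slas.robot   # plus the remaining -v pairs listed above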
15:00:44 policy-csit | ==============================================================================
15:00:44 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas
15:00:44 policy-csit | ==============================================================================
15:00:45 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test
15:00:45 policy-csit | ==============================================================================
15:00:45 policy-csit | Healthcheck :: Verify OPA PDP health check                            | PASS |
15:00:45 policy-csit | ------------------------------------------------------------------------------
15:00:45 policy-csit | ValidateDataBeforePolicyDeployment                                    | PASS |
15:00:45 policy-csit | ------------------------------------------------------------------------------
15:01:11 policy-csit | ValidatesZonePolicy                                                   | PASS |
15:01:11 policy-csit | ------------------------------------------------------------------------------
15:01:36 policy-csit | ValidatesVehiclePolicy                                                | PASS |
15:01:36 policy-csit | ------------------------------------------------------------------------------
15:02:01 policy-csit | ValidatesAbacPolicy                                                   | PASS |
15:02:01 policy-csit | ------------------------------------------------------------------------------
15:02:01 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test                              | PASS |
15:02:01 policy-csit | 5 tests, 5 passed, 0 failed
15:02:01 policy-csit | ==============================================================================
15:02:01 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas
15:02:01 policy-csit | ==============================================================================
15:03:01 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS |
15:03:01 policy-csit | ------------------------------------------------------------------------------
15:03:01 policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS |
15:03:01 policy-csit | ------------------------------------------------------------------------------
15:03:01 policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS |
15:03:01 policy-csit | ------------------------------------------------------------------------------
15:03:01 policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS |
15:03:01 policy-csit | ------------------------------------------------------------------------------
15:03:01 policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS |
15:03:01 policy-csit | ------------------------------------------------------------------------------
15:03:01 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas                              | PASS |
15:03:01 policy-csit | 5 tests, 5 passed, 0 failed
15:03:01 policy-csit | ==============================================================================
15:03:01 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas                                           | PASS |
15:03:01 policy-csit | 10 tests, 10 passed, 0 failed
15:03:01 policy-csit | ==============================================================================
15:03:01 policy-csit | Output: /tmp/results/output.xml
15:03:02 policy-csit | Log: /tmp/results/log.html
15:03:02 policy-csit | Report: /tmp/results/report.html
15:03:02 policy-csit | RESULT: 0
15:03:02 policy-csit exited with code 0
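(The counter and response-time validations in the SLA suite above read metrics back from Prometheus, which this run publishes on localhost:30259. A sketch of such a check against the Prometheus HTTP query API follows; the metric name opa_pdp_decisions_total is a placeholder for illustration, not necessarily the exact name the suite asserts on.)

    # Sketch: fetch a counter via the Prometheus HTTP API (assumed metric name).
    curl -s 'http://localhost:30259/api/v1/query' \
      --data-urlencode 'query=opa_pdp_decisions_total' | python3 -m json.tool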
var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 15:03:04 grafana | logger=settings t=2025-06-13T14:56:45.694622592Z level=info msg=Target target=[all] 15:03:04 grafana | logger=settings t=2025-06-13T14:56:45.694631143Z level=info msg="Path Home" path=/usr/share/grafana 15:03:04 grafana | logger=settings t=2025-06-13T14:56:45.694635083Z level=info msg="Path Data" path=/var/lib/grafana 15:03:04 grafana | logger=settings t=2025-06-13T14:56:45.694637843Z level=info msg="Path Logs" path=/var/log/grafana 15:03:04 grafana | logger=settings t=2025-06-13T14:56:45.694643393Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 15:03:04 grafana | logger=settings t=2025-06-13T14:56:45.694647393Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 15:03:04 grafana | logger=settings t=2025-06-13T14:56:45.694650023Z level=info msg="App mode production" 15:03:04 grafana | logger=featuremgmt t=2025-06-13T14:56:45.695153981Z level=info msg=FeatureToggles logRowsPopoverMenu=true prometheusAzureOverrideAudience=true onPremToCloudMigrations=true logsPanelControls=true ssoSettingsSAML=true unifiedRequestLog=true dashboardSceneForViewers=true tlsMemcached=true dashboardSceneSolo=true cloudWatchRoundUpEndTime=true influxdbBackendMigration=true alertingApiServer=true angularDeprecationUI=true preinstallAutoUpdate=true dashgpt=true alertingNotificationsStepMode=true failWrongDSUID=true azureMonitorPrometheusExemplars=true recoveryThreshold=true alertRuleRestore=true correlations=true lokiQueryHints=true newFiltersUI=true grafanaconThemes=true formatString=true lokiQuerySplitting=true alertingRuleRecoverDeleted=true transformationsRedesign=true cloudWatchNewLabelParsing=true promQLScope=true unifiedStorageSearchPermissionFiltering=true publicDashboardsScene=true newDashboardSharingComponent=true alertingRulePermanentlyDelete=true alertingUIOptimizeReducer=true nestedFolders=true cloudWatchCrossAccountQuerying=true annotationPermissionUpdate=true logsInfiniteScrolling=true lokiStructuredMetadata=true reportingUseRawTimeRange=true newPDFRendering=true panelMonitoring=true lokiLabelNamesQueryApi=true alertingQueryAndExpressionsStepMode=true kubernetesClientDashboardsFolders=true recordedQueriesMulti=true dataplaneFrontendFallback=true ssoSettingsApi=true dashboardScene=true prometheusUsesCombobox=true azureMonitorEnableUserAuth=true logsExploreTableVisualisation=true awsAsyncQueryCaching=true alertingInsights=true externalCorePlugins=true groupToNestedTableTransformation=true alertingRuleVersionHistoryRestore=true kubernetesPlaylists=true pinNavItems=true addFieldFromCalculationStatFunctions=true alertingSimplifiedRouting=true logsContextDatasourceUi=true useSessionStorageForRedirection=true pluginsDetailsRightPanel=true 15:03:04 grafana | logger=sqlstore t=2025-06-13T14:56:45.695214014Z level=info msg="Connecting to DB" dbtype=sqlite3 15:03:04 grafana | logger=sqlstore t=2025-06-13T14:56:45.695231644Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:45.697727773Z level=info msg="Locking database" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:45.697744964Z level=info msg="Starting DB migrations" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:45.698645826Z level=info msg="Executing migration" id="create migration_log table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:45.700294945Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.649759ms 
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:45.721045564Z level=info msg="Executing migration" id="create user table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:45.72373694Z level=info msg="Migration successfully executed" id="create user table" duration=2.692986ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:45.748560885Z level=info msg="Executing migration" id="add unique index user.login" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:45.749953295Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.394529ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:45.756180267Z level=info msg="Executing migration" id="add unique index user.email" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:45.757183282Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.002626ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:45.762759571Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:45.763501707Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=741.146µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:45.84886595Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:45.850017311Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.154021ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:45.903760387Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:45.908269547Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.509801ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:45.94005285Z level=info msg="Executing migration" id="create user table v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:45.942140235Z level=info msg="Migration successfully executed" id="create user table v2" duration=2.090674ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:45.980750621Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:45.982557375Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.802725ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.015935505Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.017916265Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=2.005701ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.025261037Z level=info msg="Executing migration" id="copy data_source v1 to v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.025687622Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=429.415µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.041387942Z level=info msg="Executing migration" id="Drop old table user_v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.042436049Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=1.051647ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.053812645Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 15:03:04 grafana | 
logger=migrator t=2025-06-13T14:56:46.056526482Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=2.716396ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.088057125Z level=info msg="Executing migration" id="Update user table charset" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.088149659Z level=info msg="Migration successfully executed" id="Update user table charset" duration=95.984µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.105920432Z level=info msg="Executing migration" id="Add last_seen_at column to user" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.107997886Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=2.076304ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.137696025Z level=info msg="Executing migration" id="Add missing user data" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.138182662Z level=info msg="Migration successfully executed" id="Add missing user data" duration=487.508µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.146390625Z level=info msg="Executing migration" id="Add is_disabled column to user" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.147721072Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.329768ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.177762363Z level=info msg="Executing migration" id="Add index user.login/user.email" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.179494634Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.728261ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.247140055Z level=info msg="Executing migration" id="Add is_service_account column to user" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.249633764Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=2.495799ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.329436589Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.337409573Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.971324ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.411888787Z level=info msg="Executing migration" id="Add uid column to user" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.414271742Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=2.384445ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.502516778Z level=info msg="Executing migration" id="Update uid column values for users" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.503066727Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=552.44µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.646447628Z level=info msg="Executing migration" id="Add unique index user_uid" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.656317189Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=10.332268ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.705944228Z level=info msg="Executing migration" id="Add is_provisioned column to user" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.708497259Z level=info msg="Migration successfully 
executed" id="Add is_provisioned column to user" duration=2.558361ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.783002335Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.786306283Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=3.307427ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.806606826Z level=info msg="Executing migration" id="update service accounts login field orgid to appear only once" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.807397734Z level=info msg="Migration successfully executed" id="update service accounts login field orgid to appear only once" duration=791.368µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.819528357Z level=info msg="Executing migration" id="update login and email fields to lowercase" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.822967709Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase" duration=3.443343ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.856874658Z level=info msg="Executing migration" id="update login and email fields to lowercase2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.857629705Z level=info msg="Migration successfully executed" id="update login and email fields to lowercase2" duration=755.617µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.903695907Z level=info msg="Executing migration" id="create temp user table v1-7" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.905533172Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.839946ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.913231136Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.914135549Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=908.413µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.938080892Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.939380599Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.299406ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.956835041Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:46.958292153Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.456541ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:47.074498097Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:47.076053863Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.560736ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:47.634421737Z level=info msg="Executing migration" id="Update temp_user table charset" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:47.63452188Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=106.474µs 15:03:04 grafana | logger=migrator 
t=2025-06-13T14:56:48.162970465Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.164158748Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.191603ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.323842828Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.32473683Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=895.882µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.389022069Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.390208052Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.186223ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.484702601Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.485994057Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.293526ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.538124761Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.544465218Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=6.337447ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.566559768Z level=info msg="Executing migration" id="create temp_user v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.568419925Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.862097ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.581030176Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.58226531Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.237324ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.714854022Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.716335434Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.482073ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.776479435Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.778202157Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.728592ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.822548863Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.824007395Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.460962ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.931544361Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.932430192Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=888.752µs 15:03:04 grafana | 
logger=migrator t=2025-06-13T14:56:48.960788006Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.961488362Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=703.265µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.966025664Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.966317784Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=292.06µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.976472737Z level=info msg="Executing migration" id="create star table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:48.977208914Z level=info msg="Migration successfully executed" id="create star table" duration=738.547µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.007857099Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.008966009Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.111639ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.052722603Z level=info msg="Executing migration" id="Add column dashboard_uid in star" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.054265368Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in star" duration=1.546565ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.065075392Z level=info msg="Executing migration" id="Add column org_id in star" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.066384258Z level=info msg="Migration successfully executed" id="Add column org_id in star" duration=1.302246ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.073133668Z level=info msg="Executing migration" id="Add column updated in star" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.074579009Z level=info msg="Migration successfully executed" id="Add column updated in star" duration=1.444781ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.085129634Z level=info msg="Executing migration" id="add index in star table on dashboard_uid, org_id and user_id columns" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.086867316Z level=info msg="Migration successfully executed" id="add index in star table on dashboard_uid, org_id and user_id columns" duration=1.736992ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.09852072Z level=info msg="Executing migration" id="create org table v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.099279317Z level=info msg="Migration successfully executed" id="create org table v1" duration=758.957µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.109055574Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.109991457Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=935.073µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.138116916Z level=info msg="Executing migration" id="create org_user table v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.140048585Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.923008ms 15:03:04 grafana | logger=migrator 
t=2025-06-13T14:56:49.176955416Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.178379257Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.428641ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.1877582Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.188870779Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.112849ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.234585703Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.236259863Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.67947ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.24940923Z level=info msg="Executing migration" id="Update org table charset" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.249454331Z level=info msg="Migration successfully executed" id="Update org table charset" duration=45.861µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.322436054Z level=info msg="Executing migration" id="Update org_user table charset" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.322531677Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=94.953µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.39973042Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.400361132Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=634.313µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.495543563Z level=info msg="Executing migration" id="create dashboard table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.497686329Z level=info msg="Migration successfully executed" id="create dashboard table" duration=2.144906ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.506477872Z level=info msg="Executing migration" id="add index dashboard.account_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.507703705Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.221803ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.811969934Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.813864501Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.897187ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.858616671Z level=info msg="Executing migration" id="create dashboard_tag table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.859569935Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=956.334µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.894107952Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:49.895851513Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.746672ms 15:03:04 grafana | logger=migrator 
t=2025-06-13T14:56:50.008486485Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:50.010198386Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.716071ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:50.109837788Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:50.116508946Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.671677ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:50.178865268Z level=info msg="Executing migration" id="create dashboard v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:50.18060394Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.745992ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:50.202957867Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:50.204558164Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.603017ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:50.249641831Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:50.251288979Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.649718ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:50.270878818Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:50.271415267Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=536.849µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:50.424220673Z level=info msg="Executing migration" id="drop table dashboard_v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:50.428030579Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=3.853968ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:50.607755205Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:50.607811397Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=55.792µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:51.049368814Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:51.062071507Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=12.699522ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:51.25940652Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:51.261430982Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.025382ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:51.35562522Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:51.357915801Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=2.291742ms 15:03:04 grafana | 
logger=migrator t=2025-06-13T14:56:51.582733514Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:51.586135845Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=3.407511ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:51.82952295Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:51.833356667Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.836267ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:52.065435222Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:52.06680693Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.373569ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:52.212817467Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:52.213809362Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=994.425µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:52.374702138Z level=info msg="Executing migration" id="Update dashboard table charset" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:52.374786491Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=89.183µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:52.699127382Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:52.699209315Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=88.823µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:52.90766379Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:52.911280579Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.623739ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:53.1441187Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:53.150701592Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=6.582501ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:53.278678562Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:53.282429864Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=3.753402ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:53.458579628Z level=info msg="Executing migration" id="Add column uid in dashboard" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:53.46259766Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=4.022832ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:53.543390341Z level=info msg="Executing migration" id="Update uid column values in dashboard" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:53.543763664Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=376.353µs 15:03:04 grafana | logger=migrator 
t=2025-06-13T14:56:53.677860629Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:53.684791373Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=6.953144ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:53.734349516Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:53.735668222Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.337897ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:54.21610738Z level=info msg="Executing migration" id="Update dashboard title length"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:54.216182212Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=72.793µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:55.023338628Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:55.025128982Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.801024ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:55.72827827Z level=info msg="Executing migration" id="create dashboard_provisioning"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:55.729744092Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.467962ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:56.2502095Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:56.256808755Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.597065ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:56.519447006Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:56.520980781Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=1.560986ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:56.546549732Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:56.548125868Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.577596ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:56.613503899Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:56.614995062Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.493524ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:56.690845805Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:56.692410921Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=1.567646ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:56.73250456Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
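The dashboard_provisioning sequence just above (rename to *_tmp_qwerty, create the v2 table, recreate its indexes, copy the rows, then drop the temporary table) is the standard rebuild dance for SQLite, whose ALTER TABLE historically could not change or drop columns. A schematic version with a deliberately simplified two-column table; this is not Grafana's actual DDL:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE dashboard_provisioning (id INTEGER PRIMARY KEY, name TEXT);
        INSERT INTO dashboard_provisioning (name) VALUES ('example');

        -- 1. move the v1 table out of the way
        ALTER TABLE dashboard_provisioning RENAME TO dashboard_provisioning_tmp_qwerty;
        -- 2. create the v2 schema (here: a tightened NOT NULL constraint)
        CREATE TABLE dashboard_provisioning (id INTEGER PRIMARY KEY, name TEXT NOT NULL DEFAULT '');
        -- 3. copy the old rows into the new table
        INSERT INTO dashboard_provisioning (id, name)
            SELECT id, name FROM dashboard_provisioning_tmp_qwerty;
        -- 4. drop the temporary v1 table
        DROP TABLE dashboard_provisioning_tmp_qwerty;
    """)
    print(conn.execute("SELECT * FROM dashboard_provisioning").fetchall())  # [(1, 'example')]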
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:56.733785056Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=1.279446ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:56.885226423Z level=info msg="Executing migration" id="Add check_sum column"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:56.89383447Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=8.610707ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:56.999716854Z level=info msg="Executing migration" id="Add index for dashboard_title"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.000852975Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.138751ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.226603218Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.227434828Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=833.739µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.233590408Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.234032383Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=444.085µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.239705026Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.241456489Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.753503ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.300066095Z level=info msg="Executing migration" id="Add isPublic for dashboard"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.30245272Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.388615ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.391129101Z level=info msg="Executing migration" id="Add deleted for dashboard"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.393122453Z level=info msg="Migration successfully executed" id="Add deleted for dashboard" duration=2.461978ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.431502435Z level=info msg="Executing migration" id="Add index for deleted"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.432641186Z level=info msg="Migration successfully executed" id="Add index for deleted" duration=1.142121ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.455563516Z level=info msg="Executing migration" id="Add column dashboard_uid in dashboard_tag"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.45960524Z level=info msg="Migration successfully executed" id="Add column dashboard_uid in dashboard_tag" duration=4.039894ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.46659878Z level=info msg="Executing migration" id="Add column org_id in dashboard_tag"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.46965412Z level=info msg="Migration successfully executed" id="Add column org_id in dashboard_tag" duration=3.05657ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.48978837Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to dashboard_tag"
15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.491655926Z level=info msg="Migration
successfully executed" id="Add missing dashboard_uid and org_id to dashboard_tag" duration=1.868046ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.516202444Z level=info msg="Executing migration" id="Add apiVersion for dashboard" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.519376748Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard" duration=3.175364ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.599061997Z level=info msg="Executing migration" id="Add index for dashboard_uid on dashboard_tag table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.601175123Z level=info msg="Migration successfully executed" id="Add index for dashboard_uid on dashboard_tag table" duration=2.115066ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.607941795Z level=info msg="Executing migration" id="Add missing dashboard_uid and org_id to star" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.609311404Z level=info msg="Migration successfully executed" id="Add missing dashboard_uid and org_id to star" duration=1.367639ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.616097177Z level=info msg="Executing migration" id="create data_source table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.617496907Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.39965ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.634067739Z level=info msg="Executing migration" id="add index data_source.account_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.636074511Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=2.006662ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.654256461Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.655678342Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.423321ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.713922815Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.715319105Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.40328ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.730322482Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.731785414Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.470803ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.761170465Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.772289192Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=11.119867ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.802771222Z level=info msg="Executing migration" id="create data_source table v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.803959455Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.190113ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.832703653Z level=info msg="Executing migration" id="create index 
IDX_data_source_org_id - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.834437015Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.733712ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.869851031Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.87176567Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.915959ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.890837742Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.892186Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.341978ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.90811763Z level=info msg="Executing migration" id="Add column with_credentials" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.911415408Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.297458ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.919425084Z level=info msg="Executing migration" id="Add secure json data column" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.923256611Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=3.832617ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.954675954Z level=info msg="Executing migration" id="Update data_source table charset" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.954747446Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=74.652µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.964024988Z level=info msg="Executing migration" id="Update initial version to 1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:57.964486385Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=460.726µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.014467332Z level=info msg="Executing migration" id="Add read_only data column" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.017902065Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.437113ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.173290112Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.173706487Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=418.865µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.284957025Z level=info msg="Executing migration" id="Update json_data with nulls" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.28537225Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=413.895µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.333017084Z level=info msg="Executing migration" id="Add uid column" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.338165028Z level=info msg="Migration successfully executed" id="Add uid column" duration=5.147774ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.372947782Z level=info msg="Executing migration" id="Update uid value" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.373529803Z level=info msg="Migration successfully executed" 
id="Update uid value" duration=586.21µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.415929239Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.418049885Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=2.126996ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.457954072Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.459793188Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.840345ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.502299988Z level=info msg="Executing migration" id="Add is_prunable column" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.506732936Z level=info msg="Migration successfully executed" id="Add is_prunable column" duration=4.436429ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.563114602Z level=info msg="Executing migration" id="Add api_version column" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.568172193Z level=info msg="Migration successfully executed" id="Add api_version column" duration=5.060551ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.659542771Z level=info msg="Executing migration" id="Update secure_json_data column to MediumText" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.659587852Z level=info msg="Migration successfully executed" id="Update secure_json_data column to MediumText" duration=49.391µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.708327235Z level=info msg="Executing migration" id="create api_key table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.709774707Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.446402ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.748848385Z level=info msg="Executing migration" id="add index api_key.account_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.7504075Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.559676ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.756214848Z level=info msg="Executing migration" id="add index api_key.key" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.757170922Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=955.344µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.877660621Z level=info msg="Executing migration" id="add index api_key.account_id_name" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.87931022Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.620078ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.984954338Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:58.986066538Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.11545ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:59.071384899Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:59.07281864Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.434811ms 15:03:04 grafana | logger=migrator 
t=2025-06-13T14:56:59.224534466Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:59.225983778Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.451602ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:59.323957631Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:59.33566518Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=11.74042ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:59.421193409Z level=info msg="Executing migration" id="create api_key table v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:59.422269377Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=1.075318ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:59.531076028Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:59.532774609Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.699501ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:59.636127705Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:59.638661856Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=2.52841ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:59.665782516Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:59.667377163Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.597867ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:59.849787846Z level=info msg="Executing migration" id="copy api_key v1 to v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:59.850797012Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=1.012826ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:59.917403744Z level=info msg="Executing migration" id="Drop old table api_key_v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:56:59.918905598Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=1.500383ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.023507678Z level=info msg="Executing migration" id="Update api_key table charset" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.023588051Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=86.603µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.163013617Z level=info msg="Executing migration" id="Add expires to api_key table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.167947194Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.937387ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.217973533Z level=info msg="Executing migration" id="Add service account foreign key" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.222985102Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=5.01341ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.270335525Z level=info msg="Executing migration" 
id="set service account foreign key to nil if 0" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.270960128Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=600.631µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.458013457Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.462184116Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=4.173129ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.5361166Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.538762325Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.647634ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.574815444Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.576455223Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.645438ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.695159888Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.696569628Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=1.412041ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.846821141Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.848056695Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.249534ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.946709643Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.948517918Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.809385ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.95919899Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.960127773Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=928.553µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.96452037Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:00.965794416Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.276556ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.155695267Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.155734028Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=44.691µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.181287362Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.181350955Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" 
duration=65.682µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.227856558Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.231380654Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.527677ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.279228085Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.284993451Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=5.766146ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.370879342Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.370936224Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=72.152µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.442207543Z level=info msg="Executing migration" id="create quota table v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.44378991Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.578996ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.585107812Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.586198101Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.092579ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.598258073Z level=info msg="Executing migration" id="Update quota table charset" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.598295054Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=37.871µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.735287373Z level=info msg="Executing migration" id="create plugin_setting table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.737249063Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.96823ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.790927953Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.792672945Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.747152ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.845768834Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.851006521Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=5.240787ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.9259085Z level=info msg="Executing migration" id="Update plugin_setting table charset" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.925994963Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=89.273µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.993453116Z level=info msg="Executing migration" id="update NULL org_id to 1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:01.99386759Z level=info 
msg="Migration successfully executed" id="update NULL org_id to 1" duration=419.535µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:02.067093329Z level=info msg="Executing migration" id="make org_id NOT NULL and DEFAULT VALUE 1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:02.077481341Z level=info msg="Migration successfully executed" id="make org_id NOT NULL and DEFAULT VALUE 1" duration=10.390591ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:02.173712502Z level=info msg="Executing migration" id="create session table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:02.175127592Z level=info msg="Migration successfully executed" id="create session table" duration=1.41508ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:02.23433954Z level=info msg="Executing migration" id="Drop old table playlist table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:02.234687012Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=336.462µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:02.34227687Z level=info msg="Executing migration" id="Drop old table playlist_item table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:02.342480677Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=207.557µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:02.393431199Z level=info msg="Executing migration" id="create playlist table v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:02.39456229Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.131051ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:02.436141547Z level=info msg="Executing migration" id="create playlist item table v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:02.437506006Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.363638ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:02.593566557Z level=info msg="Executing migration" id="Update playlist table charset" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:02.593617958Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=54.992µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:02.68790825Z level=info msg="Executing migration" id="Update playlist_item table charset" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:02.687978193Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=73.153µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:02.764568082Z level=info msg="Executing migration" id="Add playlist column created_at" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:02.768109978Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.544316ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:02.913547199Z level=info msg="Executing migration" id="Add playlist column updated_at" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:02.91691673Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.372341ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.043825178Z level=info msg="Executing migration" id="drop preferences table v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.044062307Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=241.109µs 15:03:04 grafana | logger=migrator 
t=2025-06-13T14:57:03.111171937Z level=info msg="Executing migration" id="drop preferences table v3" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.111368554Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=198.757µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.137438486Z level=info msg="Executing migration" id="create preferences table v3" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.138634399Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.196563ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.159121742Z level=info msg="Executing migration" id="Update preferences table charset" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.159166583Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=46.792µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.253125283Z level=info msg="Executing migration" id="Add column team_id in preferences" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.258693652Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.532198ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.389354505Z level=info msg="Executing migration" id="Update team_id column values in preferences" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.389690767Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=339.832µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.407547696Z level=info msg="Executing migration" id="Add column week_start in preferences" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.412823014Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=5.251878ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.446852581Z level=info msg="Executing migration" id="Add column preferences.json_data" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.453097725Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=6.243303ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.493878773Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.493913564Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=37.421µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.544085408Z level=info msg="Executing migration" id="Add preferences index org_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.545877683Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.794064ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.620557123Z level=info msg="Executing migration" id="Add preferences index user_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.622359328Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.800884ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.645855368Z level=info msg="Executing migration" id="create alert table v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.647546608Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.69098ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.722600862Z level=info msg="Executing 
migration" id="add index alert org_id & id " 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.724082835Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.475423ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.843632981Z level=info msg="Executing migration" id="add index alert state" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.845676944Z level=info msg="Migration successfully executed" id="add index alert state" duration=2.048894ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.883527627Z level=info msg="Executing migration" id="add index alert dashboard_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.88499201Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.465333ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.951845911Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:03.952923319Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.074119ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:04.057961075Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:04.058878218Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=918.953µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:04.171200225Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:04.172363707Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.167502ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:04.259229793Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:04.267628113Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=8.40242ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:04.385241329Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:04.386502655Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.265146ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:04.544588418Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:04.547281714Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=2.695586ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:04.623237431Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:04.624033289Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=798.259µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:04.681155752Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:04.682458488Z level=info msg="Migration successfully executed" id="drop table 
alert_rule_tag_v1" duration=1.297786ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:04.754051689Z level=info msg="Executing migration" id="create alert_notification table v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:04.755749299Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.700681ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:04.822978074Z level=info msg="Executing migration" id="Add column is_default" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:04.826638325Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.664651ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:04.945840557Z level=info msg="Executing migration" id="Add column frequency" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:04.951423167Z level=info msg="Migration successfully executed" id="Add column frequency" duration=5.58602ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.003840932Z level=info msg="Executing migration" id="Add column send_reminder" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.011196135Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=7.351273ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.083564313Z level=info msg="Executing migration" id="Add column disable_resolve_message" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.089969932Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=6.40727ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.154619933Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.157165844Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=2.552201ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.260925174Z level=info msg="Executing migration" id="Update alert table charset" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.260989447Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=73.243µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.313484114Z level=info msg="Executing migration" id="Update alert_notification table charset" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.313549586Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=68.852µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.380908475Z level=info msg="Executing migration" id="create notification_journal table v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.382362117Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.452722ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.425310743Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.427012734Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.701081ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.470284701Z level=info msg="Executing migration" id="drop alert_notification_journal" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.471455313Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" 
duration=1.170242ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.530745343Z level=info msg="Executing migration" id="create alert_notification_state table v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.53232753Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.584567ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.540927728Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.543126246Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=2.196159ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.59832316Z level=info msg="Executing migration" id="Add for to alert table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.60306169Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.735879ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.647338563Z level=info msg="Executing migration" id="Add column uid in alert_notification" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.654278921Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=6.943648ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.753287262Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.754124592Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=871.511µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.919141453Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:05.921144355Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=2.005922ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:06.057767591Z level=info msg="Executing migration" id="Remove unique index org_id_name" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:06.059086978Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.313947ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:06.161340704Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:06.168384836Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=7.047382ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:06.282874931Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:06.282924963Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=45.501µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:06.386982614Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:06.388601812Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.621358ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:06.530460555Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:06.531931487Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.473202ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:06.689095888Z level=info msg="Executing migration" id="Drop old annotation table v4" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:06.689323796Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=231.018µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:06.837008137Z level=info msg="Executing migration" id="create annotation table v5" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:06.839006239Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.999712ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:07.059725682Z level=info msg="Executing migration" id="add index annotation 0 v3" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:07.061542447Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.817975ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:07.343302063Z level=info msg="Executing migration" id="add index annotation 1 v3" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:07.344866889Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.566366ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:07.470015615Z level=info msg="Executing migration" id="add index annotation 2 v3" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:07.474824007Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=4.809572ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:07.69470844Z level=info msg="Executing migration" id="add index annotation 3 v3" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:07.696519545Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.813795ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:07.786964389Z level=info msg="Executing migration" id="add index annotation 4 v3" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:07.799965524Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=13.003595ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:07.908745634Z level=info msg="Executing migration" id="Update annotation table charset" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:07.908830537Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=88.263µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:07.962389423Z level=info msg="Executing migration" id="Add column region_id to annotation table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:07.968609885Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=6.220822ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:07.996467111Z level=info msg="Executing migration" id="Drop category_id index" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:07.998293007Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.826006ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.100697129Z level=info msg="Executing migration" id="Add column tags to annotation table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.105987508Z level=info msg="Migration successfully executed" id="Add column tags to 
annotation table" duration=5.293229ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.118389151Z level=info msg="Executing migration" id="Create annotation_tag table v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.119249992Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=860.131µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.148953345Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.149879828Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=926.294µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.262796516Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.264241447Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.441021ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.350372978Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.363790747Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=13.42237ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.47179503Z level=info msg="Executing migration" id="Create annotation_tag table v3" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.473616515Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=1.825715ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.553769631Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.555530154Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.762873ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.626670588Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.627504698Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=836.89µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.722306327Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.723812991Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=1.506944ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.763332915Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.763889295Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=556.029µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.797933782Z level=info msg="Executing migration" id="Add created time to annotation table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.805891357Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=7.259679ms 15:03:04 grafana | logger=migrator 
t=2025-06-13T14:57:08.871307726Z level=info msg="Executing migration" id="Add updated time to annotation table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.881065345Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=9.763119ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.992677896Z level=info msg="Executing migration" id="Add index for created in annotation table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:08.995506608Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=2.832101ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:09.152324786Z level=info msg="Executing migration" id="Add index for updated in annotation table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:09.155702036Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=3.35439ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:09.351053812Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:09.351893812Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=861.18µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:09.403386084Z level=info msg="Executing migration" id="Add epoch_end column" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:09.411861987Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=8.477233ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:09.431632074Z level=info msg="Executing migration" id="Add index for epoch_end" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:09.433623285Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.991121ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:09.574832885Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:09.575234779Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=404.794µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:09.669083366Z level=info msg="Executing migration" id="Move region to single row" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:09.670515867Z level=info msg="Migration successfully executed" id="Move region to single row" duration=1.421231ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:09.794973088Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:09.797181087Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=2.21141ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:09.905720618Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:09.907909856Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=2.218259ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:09.970390121Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:09.973604806Z level=info 
msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=3.241856ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.058200321Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.06012839Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.928319ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.153000151Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.154665041Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.654409ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.305554267Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.307193915Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.643498ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.38310104Z level=info msg="Executing migration" id="Increase tags column to length 4096" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.383204224Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=108.814µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.489238256Z level=info msg="Executing migration" id="Increase prev_state column to length 40 not null" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.489317978Z level=info msg="Migration successfully executed" id="Increase prev_state column to length 40 not null" duration=84.473µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.567675451Z level=info msg="Executing migration" id="Increase new_state column to length 40 not null" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.567751443Z level=info msg="Migration successfully executed" id="Increase new_state column to length 40 not null" duration=79.493µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.590823578Z level=info msg="Executing migration" id="create test_data table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.592119745Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.299387ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.650541664Z level=info msg="Executing migration" id="create dashboard_version table v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.651996676Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.456662ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.707807612Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.709244413Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.436251ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.799395757Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.80114119Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and 
dashboard_version.version" duration=1.748283ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.916652061Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:10.917101667Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=453.077µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.078351583Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.079076249Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=727.366µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.168098943Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.168153725Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=51.181µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.23373487Z level=info msg="Executing migration" id="Add apiVersion for dashboard_version" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.237332699Z level=info msg="Migration successfully executed" id="Add apiVersion for dashboard_version" duration=3.599099ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.341444482Z level=info msg="Executing migration" id="create team table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.343114742Z level=info msg="Migration successfully executed" id="create team table" duration=1.673859ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.441656616Z level=info msg="Executing migration" id="add index team.org_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.443261053Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.607998ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.536143905Z level=info msg="Executing migration" id="add unique index team_org_id_name" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.540957907Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=4.804872ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.655563275Z level=info msg="Executing migration" id="Add column uid in team" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.664014337Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=8.454302ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.719625646Z level=info msg="Executing migration" id="Update uid column values in team" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.720112134Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=490.037µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.737691792Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.739325261Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.633589ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.830854574Z level=info msg="Executing migration" id="Add column external_uid in team" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.839309236Z level=info msg="Migration successfully executed" id="Add column external_uid in team" 
duration=8.448872ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.893761944Z level=info msg="Executing migration" id="Add column is_provisioned in team" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.901114576Z level=info msg="Migration successfully executed" id="Add column is_provisioned in team" duration=7.352033ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.909743375Z level=info msg="Executing migration" id="create team member table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.910318726Z level=info msg="Migration successfully executed" id="create team member table" duration=571.95µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.939586062Z level=info msg="Executing migration" id="add index team_member.org_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.941873974Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=2.285762ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.973678741Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:11.976234653Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=2.557282ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.056640548Z level=info msg="Executing migration" id="add index team_member.team_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.058746154Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=2.109555ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.084635429Z level=info msg="Executing migration" id="Add column email to team table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.093543808Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=8.911219ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.103879868Z level=info msg="Executing migration" id="Add column external to team_member table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.113298894Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=9.418026ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.13218712Z level=info msg="Executing migration" id="Add column permission to team_member table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.141386369Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=9.203169ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.199876711Z level=info msg="Executing migration" id="add unique index team_member_user_id_org_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.20266536Z level=info msg="Migration successfully executed" id="add unique index team_member_user_id_org_id" duration=2.79002ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.244409183Z level=info msg="Executing migration" id="create dashboard acl table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.246177616Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.767673ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.308630679Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.310772565Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" 
duration=2.144706ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.434741619Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.435804227Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.064918ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.55361778Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.555313731Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.543265ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.602455276Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.604202989Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.754973ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.612407402Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.614114003Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.706181ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.648774533Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.650695802Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.920858ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.824721935Z level=info msg="Executing migration" id="add index dashboard_permission" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.826159976Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.442811ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.9669072Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:12.96831261Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=1.40977ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:13.2496298Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:13.250251993Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=626.593µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:13.453662397Z level=info msg="Executing migration" id="create tag table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:13.455213822Z level=info msg="Migration successfully executed" id="create tag table" duration=1.557175ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:13.616638935Z level=info msg="Executing migration" id="add index tag.key_value" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:13.618760411Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=2.123606ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:13.759081399Z level=info msg="Executing migration" id="create login attempt table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:13.760058144Z level=info msg="Migration successfully executed" id="create login 
attempt table" duration=974.275µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.010809701Z level=info msg="Executing migration" id="add index login_attempt.username" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.013071642Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=2.265781ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.119004671Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.120724182Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.721502ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.234732219Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.252248916Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=17.520216ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.328089578Z level=info msg="Executing migration" id="create login_attempt v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.329609262Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.520924ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.43244592Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.43440569Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.991861ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.513463687Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.513913463Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=450.446µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.520513209Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.521422602Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=907.752µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.530578539Z level=info msg="Executing migration" id="create user auth table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.531956718Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.378709ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.603842879Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.605394325Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.555355ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.613537646Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.613599158Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=62.552µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.618789964Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.624420285Z level=info msg="Migration successfully 
executed" id="Add OAuth access token to user_auth" duration=5.629531ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.672560507Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.682283474Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=9.736288ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.736452331Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.74676209Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=10.313249ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.823981632Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.83595959Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=11.978029ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.981832597Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:14.984213502Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=2.385606ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.060819701Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.071209243Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=10.393572ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.162124124Z level=info msg="Executing migration" id="Add user_unique_id to user_auth" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.17543999Z level=info msg="Migration successfully executed" id="Add user_unique_id to user_auth" duration=13.218513ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.273789277Z level=info msg="Executing migration" id="create server_lock table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.275240989Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.454682ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.378623456Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.381126666Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=2.50583ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.547932051Z level=info msg="Executing migration" id="create user auth token table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.550118469Z level=info msg="Migration successfully executed" id="create user auth token table" duration=2.188788ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.662919583Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.665692392Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=2.772659ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.674076532Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.675276055Z level=info msg="Migration successfully executed" id="add 
unique index user_auth_token.prev_auth_token" duration=1.199193ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.679144924Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.681509868Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=2.365995ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.772012725Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.781077869Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=9.068765ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.803059405Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.807531805Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=4.443759ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.82110042Z level=info msg="Executing migration" id="add external_session_id to user_auth_token" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.829254822Z level=info msg="Migration successfully executed" id="add external_session_id to user_auth_token" duration=8.158512ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.905538929Z level=info msg="Executing migration" id="create cache_data table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.908571527Z level=info msg="Migration successfully executed" id="create cache_data table" duration=3.060809ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.994348655Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:15.999410686Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=5.57713ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:16.100616195Z level=info msg="Executing migration" id="create short_url table v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:16.102902357Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=2.286752ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:16.249348014Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:16.252090822Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=2.745518ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:16.355327954Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:16.355821481Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=498.237µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:16.485607993Z level=info msg="Executing migration" id="delete alert_definition table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:16.486028508Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=422.395µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:16.567967598Z level=info msg="Executing migration" id="recreate alert_definition table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:16.570722127Z level=info msg="Migration successfully 
executed" id="recreate alert_definition table" duration=2.757319ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:16.677330769Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:16.679237627Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.909488ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:16.798957259Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:16.801738698Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=2.78537ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:16.983341693Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:16.983432016Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=87.053µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.027490511Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.029547585Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=2.060084ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.157292483Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.159510203Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=2.221789ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.22403298Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.226415605Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=2.375805ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.253201703Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.255564038Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=2.363004ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.263828953Z level=info msg="Executing migration" id="Add column paused in alert_definition" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.273680846Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=9.851722ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.324528454Z level=info msg="Executing migration" id="drop alert_definition table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.326640689Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=2.115515ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.374715669Z level=info msg="Executing migration" id="delete alert_definition_version table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.375122993Z level=info msg="Migration 
successfully executed" id="delete alert_definition_version table" duration=409.885µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.441794718Z level=info msg="Executing migration" id="recreate alert_definition_version table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.443745107Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.94973ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.455486117Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.456897108Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.420091ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.462743697Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.46451125Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.766933ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.4700984Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.470122191Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=24.52µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.473484231Z level=info msg="Executing migration" id="drop alert_definition_version table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.474929752Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.445081ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.492507131Z level=info msg="Executing migration" id="create alert_instance table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.494483172Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.975631ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.498827037Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.500982844Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=2.159407ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.505528167Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.506513022Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=983.935µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.509200838Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.515290356Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.088798ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.518062785Z level=info msg="Executing migration" 
id="remove index def_org_id, def_uid, current_state on alert_instance" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.519921911Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.858726ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.522773373Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.523717497Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=943.654µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.526803368Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.55371839Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=26.906352ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.556618294Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.583835267Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=27.209353ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.587357043Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.588154972Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=793.888µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.591944307Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.593120099Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.175232ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.598012714Z level=info msg="Executing migration" id="add current_reason column related to current_state" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.604309329Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.296285ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.617823093Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.625614911Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=7.793789ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.629165568Z level=info msg="Executing migration" id="create alert_rule table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.630256807Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.090799ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.633623278Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.635244846Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.620147ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.639080593Z level=info msg="Executing migration" id="add 
index in alert_rule on org_id and uid columns" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.640917308Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.835875ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.644845159Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.645895336Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.049867ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.64851865Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.648537751Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=19.741µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.651426594Z level=info msg="Executing migration" id="add column for to alert_rule" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.658072852Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.648038ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.660831811Z level=info msg="Executing migration" id="add column annotations to alert_rule" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.666332867Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.500346ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.669654956Z level=info msg="Executing migration" id="add column labels to alert_rule" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.675970352Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.314846ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.678901117Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.679716036Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=814.679µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.682116322Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.683086216Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=972.454µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.686356763Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.69184916Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=5.491787ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.695539022Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.701672601Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.132769ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.705082473Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 15:03:04 grafana | 
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.708792456Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.714984547Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.191091ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.719369014Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.724909942Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.540838ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.758019436Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.758064528Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=50.332µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.761050495Z level=info msg="Executing migration" id="create alert_rule_version table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.762252178Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.202053ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.765584997Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.76651359Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=928.433µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.769364702Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.770258814Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=894.552µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.774769395Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.774787666Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=18.341µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.777227263Z level=info msg="Executing migration" id="add column for to alert_rule_version"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.783697895Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.469321ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.786834477Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.793340499Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.505892ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.80229701Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
alert_rule_version" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.808597285Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.299745ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.813356265Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.818967426Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=5.609591ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.822285214Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.830730136Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=8.443182ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.835095373Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.835130034Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=27.301µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.838986812Z level=info msg="Executing migration" id=create_alert_configuration_table 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.840306959Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.319847ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.845899469Z level=info msg="Executing migration" id="Add column default in alert_configuration" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.852841547Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.941888ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.856191277Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.856207608Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=16.961µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.860605875Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.865721068Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=5.113683ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.903317192Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.904632959Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.315077ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.908320461Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.916723332Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=8.403231ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.920983824Z level=info msg="Executing migration" id=create_ngalert_configuration_table 15:03:04 grafana 
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.924952626Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.926005154Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.051188ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.929637624Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.937485854Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=7.847561ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.943389135Z level=info msg="Executing migration" id="create provenance_type table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.944377751Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=992.636µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.947486982Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.948635163Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.147601ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.951861858Z level=info msg="Executing migration" id="create alert_image table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.952777141Z level=info msg="Migration successfully executed" id="create alert_image table" duration=914.093µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.958599689Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.959880245Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.279146ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.964075455Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.964100276Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=25.971µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.968110459Z level=info msg="Executing migration" id=create_alert_configuration_history_table
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.968879587Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=768.488µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.975330838Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.976268831Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=937.753µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.979609051Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.980122599Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
unique orgID index on alert_configuration if exists" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.983955976Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.984445614Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=489.167µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.988494838Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.989881198Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.38513ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:17.994327017Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.000097903Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=5.769646ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.004235751Z level=info msg="Executing migration" id="create library_element table v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.005190645Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=954.504µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.008797214Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.009982397Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.184193ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.01314478Z level=info msg="Executing migration" id="create library_element_connection table v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.013895337Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=750.117µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.03439374Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.036372681Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.98171ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.040978775Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.042127206Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.148031ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.048064239Z level=info msg="Executing migration" id="increase max description length to 2048" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.04809351Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=29.511µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.053599627Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.053619827Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=20.38µs 15:03:04 grafana | 
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.069803716Z level=info msg="Migration successfully executed" id="add library_element folder uid" duration=10.304979ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.075543671Z level=info msg="Executing migration" id="populate library_element folder_uid"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.076225926Z level=info msg="Migration successfully executed" id="populate library_element folder_uid" duration=682.955µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.080916143Z level=info msg="Executing migration" id="add index library_element org_id-folder_uid-name-kind"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.083877989Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_uid-name-kind" duration=2.980566ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.088580288Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.089718958Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=1.138461ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.095164403Z level=info msg="Executing migration" id="create data_keys table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.096500221Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.334738ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.119032587Z level=info msg="Executing migration" id="create secrets table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.120695616Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.67721ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.132204288Z level=info msg="Executing migration" id="rename data_keys name column to id"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.172648844Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=40.433815ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.178378009Z level=info msg="Executing migration" id="add name column into data_keys"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.186867212Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=8.489213ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.190578685Z level=info msg="Executing migration" id="copy data_keys id column values into name"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.190787243Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=207.368µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.195766171Z level=info msg="Executing migration" id="rename data_keys name column to label"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.231299741Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=35.52147ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.245988137Z level=info msg="Executing migration" id="rename data_keys id column back to name"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.280405638Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=34.41431ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.395823655Z level=info msg="Executing migration" id="create kv_store table v1"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.399968513Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=4.132628ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.411838588Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.414156581Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=2.320703ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.507590182Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.51060998Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=3.020378ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.702611666Z level=info msg="Executing migration" id="create permission table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.704730442Z level=info msg="Migration successfully executed" id="create permission table" duration=2.129866ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.731312453Z level=info msg="Executing migration" id="add unique index permission.role_id"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.733554093Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=2.246511ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.848856946Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.851691288Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=2.834981ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.876705702Z level=info msg="Executing migration" id="create role table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.877901235Z level=info msg="Migration successfully executed" id="create role table" duration=1.195423ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.903116907Z level=info msg="Executing migration" id="add column display_name"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.914911978Z level=info msg="Migration successfully executed" id="add column display_name" duration=11.796652ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.96277911Z level=info msg="Executing migration" id="add column group_name"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.968466744Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.689204ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.989866629Z level=info msg="Executing migration" id="add index role.org_id"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:18.992016326Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=2.152367ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.009624876Z level=info msg="Executing migration" id="add unique index role_org_id_name"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.011400819Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.775204ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.027778825Z level=info msg="Executing migration" id="add index role_org_id_uid"
migration" id="add index role_org_id_uid" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.030099608Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=2.320273ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.086171743Z level=info msg="Executing migration" id="create team role table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.08776253Z level=info msg="Migration successfully executed" id="create team role table" duration=1.591207ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.197474023Z level=info msg="Executing migration" id="add index team_role.org_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.199218146Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.743693ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.222372564Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.22450932Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=2.139057ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.269382165Z level=info msg="Executing migration" id="add index team_role.team_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.283064134Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=13.684229ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.390424333Z level=info msg="Executing migration" id="create user role table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.391980659Z level=info msg="Migration successfully executed" id="create user role table" duration=1.556076ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.471498652Z level=info msg="Executing migration" id="add index user_role.org_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.47537307Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=3.876048ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.651644134Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.653972487Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=2.331503ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.708318781Z level=info msg="Executing migration" id="add index user_role.user_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.710195068Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.875797ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.792167989Z level=info msg="Executing migration" id="create builtin role table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.794049327Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.882318ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.868557151Z level=info msg="Executing migration" id="add index builtin_role.role_id" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.875898334Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=7.348173ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:19.95016133Z level=info msg="Executing migration" id="add index builtin_role.name" 15:03:04 grafana | logger=migrator 
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.016347136Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.023548484Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=7.204098ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.073391806Z level=info msg="Executing migration" id="add index builtin_role.org_id"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.075877525Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=2.486089ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.190220644Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.191674906Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.449052ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.293058142Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.296725173Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=3.669931ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.317460755Z level=info msg="Executing migration" id="add unique index role.uid"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.319424605Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.965391ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.412041167Z level=info msg="Executing migration" id="create seed assignment table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.416681813Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=4.648136ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.53265445Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.533877974Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.226164ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.570865717Z level=info msg="Executing migration" id="add column hidden to role table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.584066489Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=13.200132ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.596323337Z level=info msg="Executing migration" id="permission kind migration"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.60645767Z level=info msg="Migration successfully executed" id="permission kind migration" duration=10.134363ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.663474259Z level=info msg="Executing migration" id="permission attribute migration"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.677079815Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=13.605216ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.786131435Z level=info msg="Executing migration" id="permission identifier migration"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.79604684Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=9.918545ms
msg="Migration successfully executed" id="permission identifier migration" duration=9.918545ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.899223229Z level=info msg="Executing migration" id="add permission identifier index" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.903077367Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=3.857968ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.948324495Z level=info msg="Executing migration" id="add permission action scope role_id index" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.949712055Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.38741ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.986616565Z level=info msg="Executing migration" id="remove permission role_id action scope index" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:20.989361203Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=2.747599ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.05136786Z level=info msg="Executing migration" id="add group mapping UID column to user_role table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.074705725Z level=info msg="Migration successfully executed" id="add group mapping UID column to user_role table" duration=23.336545ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.133936543Z level=info msg="Executing migration" id="add user_role org ID, user ID, role ID, group mapping UID index" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.137006283Z level=info msg="Migration successfully executed" id="add user_role org ID, user ID, role ID, group mapping UID index" duration=3.07203ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.244743266Z level=info msg="Executing migration" id="remove user_role org ID, user ID, role ID index" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.246882252Z level=info msg="Migration successfully executed" id="remove user_role org ID, user ID, role ID index" duration=2.141846ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.369070122Z level=info msg="Executing migration" id="create query_history table v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.371229769Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=2.160227ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.426956372Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.429901297Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=2.945465ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.47919661Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.479252542Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=60.432µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.508889272Z level=info msg="Executing migration" id="create query_history_details table v1" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.5119034Z level=info msg="Migration successfully executed" id="create query_history_details table v1" duration=3.016198ms 15:03:04 
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.596602219Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=219.688µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.614231419Z level=info msg="Executing migration" id="teams permissions migration"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.616067175Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=1.835786ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.647715787Z level=info msg="Executing migration" id="dashboard permissions"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.654120566Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=6.40607ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.712836315Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.713686346Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=850.611µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.840630386Z level=info msg="Executing migration" id="drop managed folder create actions"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.841126213Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=500.288µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.922054667Z level=info msg="Executing migration" id="alerting notification permissions"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.923262581Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=1.210523ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.944098186Z level=info msg="Executing migration" id="create query_history_star table v1"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.946630256Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=2.532131ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.964940781Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:21.967386509Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=2.448157ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.010347125Z level=info msg="Executing migration" id="add column org_id in query_history_star"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.036738529Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=26.375603ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.068531106Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.068565887Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=37.882µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.126965605Z level=info msg="Executing migration" id="create correlation table v1"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.128513481Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.550636ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.137547184Z level=info msg="Executing migration" id="add index correlations.uid"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.138770777Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.223593ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.150105343Z level=info msg="Executing migration" id="add index correlations.source_uid"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.151950399Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.844556ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.191990231Z level=info msg="Executing migration" id="add correlation config column"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.202975874Z level=info msg="Migration successfully executed" id="add correlation config column" duration=10.986533ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.226283847Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.22805667Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.772823ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.264989121Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.26691231Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.92319ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.298173838Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.320854939Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=22.66811ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.351228415Z level=info msg="Executing migration" id="create correlation v2"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.353447125Z level=info msg="Migration successfully executed" id="create correlation v2" duration=2.218639ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.381411735Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.383909634Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=2.497899ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.398428653Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.400318201Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.889148ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.413949108Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.416831181Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=2.882163ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.55101955Z level=info msg="Executing migration" id="copy correlation v1 to v2"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.552246324Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=1.228254ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.574392426Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.57589299Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.500533ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.597403719Z level=info msg="Executing migration" id="add provisioning column"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.606493344Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.089625ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.684307917Z level=info msg="Executing migration" id="add type column"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.696289195Z level=info msg="Migration successfully executed" id="add type column" duration=11.978978ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.821448311Z level=info msg="Executing migration" id="create entity_events table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.82253993Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.094589ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.997714254Z level=info msg="Executing migration" id="create dashboard public config v1"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:22.999398634Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.688011ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:23.117045291Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:23.117773377Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:23.259033319Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:23.260119678Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:23.286209531Z level=info msg="Executing migration" id="Drop old dashboard public config table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:23.287605481Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.395849ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:23.57198213Z level=info msg="Executing migration" id="recreate dashboard public config v1"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:23.573983642Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=2.001512ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:23.703802474Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:23.706111567Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=2.309393ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:23.774293245Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:23.776809465Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=2.51574ms
successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=2.51574ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:23.845772341Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:23.847882217Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=2.110286ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:23.900444177Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:23.902337364Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.892518ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:23.949739949Z level=info msg="Executing migration" id="Drop public config table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:23.951459061Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.717322ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:23.9847157Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:23.987869303Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=2.547661ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.049087461Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.051551418Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=2.464047ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.068789237Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.071609826Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.82266ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.145362878Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.148129506Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=2.767498ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.247419589Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.273687856Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=26.273927ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.451997888Z level=info msg="Executing migration" id="add annotations_enabled column" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.465218804Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=13.220516ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.536460278Z level=info msg="Executing migration" id="add time_selection_enabled column" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.549941494Z level=info msg="Migration successfully executed" id="add 
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.589823401Z level=info msg="Executing migration" id="delete orphaned public dashboards"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.59063741Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=816.489µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.602757577Z level=info msg="Executing migration" id="add share column"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.617146235Z level=info msg="Migration successfully executed" id="add share column" duration=14.387658ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.679445093Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.68019807Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=755.527µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.760177912Z level=info msg="Executing migration" id="create file table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.762730022Z level=info msg="Migration successfully executed" id="create file table" duration=2.550931ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.825379533Z level=info msg="Executing migration" id="file table idx: path natural pk"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.827610421Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=2.233419ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.862332717Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.863828149Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.496223ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.916871991Z level=info msg="Executing migration" id="create file_meta table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.918948844Z level=info msg="Migration successfully executed" id="create file_meta table" duration=2.077484ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.984656673Z level=info msg="Executing migration" id="file table idx: path key"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:24.986995445Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=2.338913ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.004594276Z level=info msg="Executing migration" id="set path collation in file table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.004851605Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=261.499µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.090804638Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.091118549Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=317.641µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.114438902Z level=info msg="Executing migration" id="managed permissions migration"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.115311913Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=874.571µs
migration" duration=874.571µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.126414945Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.126802038Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=387.474µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.180809934Z level=info msg="Executing migration" id="RBAC action name migrator" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.183504779Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.697165ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.19657961Z level=info msg="Executing migration" id="Add UID column to playlist" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.208009614Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=11.430224ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.264205346Z level=info msg="Executing migration" id="Update uid column values in playlist" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.264753316Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=550.49µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.29691044Z level=info msg="Executing migration" id="Add index for uid in playlist" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.298773076Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.863916ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.307361229Z level=info msg="Executing migration" id="update group index for alert rules" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.30794067Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=580.731µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.340388605Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.340868341Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=481.797µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.380469039Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.381667371Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=1.194612ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.401977358Z level=info msg="Executing migration" id="add action column to seed_assignment" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.413127021Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=11.153424ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.423601571Z level=info msg="Executing migration" id="add scope column to seed_assignment" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.435523511Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=11.92181ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.452537682Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.454571143Z level=info 
msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=2.033381ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.464655829Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.541842203Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=77.186304ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.552023892Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.554366305Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=2.346323ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.612478575Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.615495752Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=3.014276ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.665072181Z level=info msg="Executing migration" id="add primary key to seed_assigment" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.691179412Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=26.113471ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.749989537Z level=info msg="Executing migration" id="add origin column to seed_assignment" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.760294161Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=10.305244ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.830870191Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.831263845Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=395.304µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.869707761Z level=info msg="Executing migration" id="prevent seeding OnCall access" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.869889578Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=180.897µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.933062717Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.933348677Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=287.6µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.965551493Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.965777551Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=226.558µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.999587674Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:25.999904085Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=320.121µs 15:03:04 grafana | 
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.038431825Z level=info msg="Executing migration" id="create folder table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.039458581Z level=info msg="Migration successfully executed" id="create folder table" duration=1.028656ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.068867289Z level=info msg="Executing migration" id="Add index for parent_uid"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.07004676Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.176181ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.099302383Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.100493745Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.190882ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.127346532Z level=info msg="Executing migration" id="Update folder title length"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.127400684Z level=info msg="Migration successfully executed" id="Update folder title length" duration=57.002µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.139208051Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.141060106Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.852605ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.165743287Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.16725588Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.512463ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.202721102Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.204834076Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=2.115965ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.214928422Z level=info msg="Executing migration" id="Sync dashboard and folder table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.215313826Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=378.944µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.237512649Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.237762368Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=249.759µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.251763252Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.252681905Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=918.332µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.272642009Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.274178643Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.539434ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.324183197Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.326294542Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=2.114325ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.383557792Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.385711508Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=2.155586ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.470338565Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.472304564Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.96034ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.525124098Z level=info msg="Executing migration" id="Remove unique index UQE_folder_org_id_parent_uid_title"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.527360377Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_org_id_parent_uid_title" duration=2.239299ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.64453166Z level=info msg="Executing migration" id="create anon_device table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.645632259Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.103299ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.822027713Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.824045744Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.020371ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.926854882Z level=info msg="Executing migration" id="add index anon_device.updated_at"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:26.928872933Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=2.020692ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.005192446Z level=info msg="Executing migration" id="create signing_key table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.006986639Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.796134ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.109244327Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.11046861Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.227233ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.196155174Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.197722499Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.567665ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.25783188Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.258507804Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=676.204µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.265427198Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.278543331Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=13.115603ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.339633136Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.340800788Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.169511ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.368587858Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.368630429Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=46.371µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.414889252Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.41625696Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.370578ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.478474985Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.478522937Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=51.982µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.546554697Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.548082111Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.528894ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.668702997Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.670920406Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=2.229469ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.79178184Z level=info msg="Executing migration" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.793252482Z level=info msg="Migration successfully executed" id="Remove unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.473582ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.896614559Z level=info msg="Executing migration" id="create sso_setting table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:27.898727714Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=2.116285ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.048412665Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.049923979Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.515674ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.141613774Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.142224386Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=614.261µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.179992778Z level=info msg="Executing migration" id="managed dashboard permissions annotation actions migration"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.180757015Z level=info msg="Migration successfully executed" id="managed dashboard permissions annotation actions migration" duration=789.028µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.238669719Z level=info msg="Executing migration" id="create cloud_migration table v1"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.240449511Z level=info msg="Migration successfully executed" id="create cloud_migration table v1" duration=1.782903ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.296957435Z level=info msg="Executing migration" id="create cloud_migration_run table v1"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.299219175Z level=info msg="Migration successfully executed" id="create cloud_migration_run table v1" duration=2.26193ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.423037914Z level=info msg="Executing migration" id="add stack_id column"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.438030173Z level=info msg="Migration successfully executed" id="add stack_id column" duration=14.991249ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.462454175Z level=info msg="Executing migration" id="add region_slug column"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.475575238Z level=info msg="Migration successfully executed" id="add region_slug column" duration=13.121063ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.496511366Z level=info msg="Executing migration" id="add cluster_slug column"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.511748854Z level=info msg="Migration successfully executed" id="add cluster_slug column" duration=15.232538ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.557464037Z level=info msg="Executing migration" id="add migration uid column"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.564960342Z level=info msg="Migration successfully executed" id="add migration uid column" duration=7.497955ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.69017286Z level=info msg="Executing migration" id="Update uid column values for migration"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.690627156Z level=info msg="Migration successfully executed" id="Update uid column values for migration" duration=457.177µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.732799364Z level=info msg="Executing migration" id="Add unique index migration_uid"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.735073594Z level=info msg="Migration successfully executed" id="Add unique index migration_uid" duration=2.27676ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.767091344Z level=info msg="Executing migration" id="add migration run uid column"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.777714599Z level=info msg="Migration successfully executed" id="add migration run uid column" duration=10.623244ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.816216887Z level=info msg="Executing migration" id="Update uid column values for migration run"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.816498377Z level=info msg="Migration successfully executed" id="Update uid column values for migration run" duration=280.92µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.912967781Z level=info msg="Executing migration" id="Add unique index migration_run_uid"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.915415467Z level=info msg="Migration successfully executed" id="Add unique index migration_run_uid" duration=2.452036ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.960583011Z level=info msg="Executing migration" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:28.989879455Z level=info msg="Migration successfully executed" id="Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1" duration=29.270513ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.051029952Z level=info msg="Executing migration" id="create cloud_migration_session v2"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.053039213Z level=info msg="Migration successfully executed" id="create cloud_migration_session v2" duration=2.010351ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.087934445Z level=info msg="Executing migration" id="create index UQE_cloud_migration_session_uid - v2"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.090571888Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_session_uid - v2" duration=2.641564ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.120849626Z level=info msg="Executing migration" id="copy cloud_migration_session v1 to v2"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.121768248Z level=info msg="Migration successfully executed" id="copy cloud_migration_session v1 to v2" duration=921.072µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.171917278Z level=info msg="Executing migration" id="drop cloud_migration_session_tmp_qwerty"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.173460652Z level=info msg="Migration successfully executed" id="drop cloud_migration_session_tmp_qwerty" duration=1.546564ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.199702638Z level=info msg="Executing migration" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.220746581Z level=info msg="Migration successfully executed" id="Rename table cloud_migration_run to cloud_migration_snapshot_tmp_qwerty - v1" duration=21.013872ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.284900124Z level=info msg="Executing migration" id="create cloud_migration_snapshot v2"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.286723409Z level=info msg="Migration successfully executed" id="create cloud_migration_snapshot v2" duration=1.826295ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.337838282Z level=info msg="Executing migration" id="create index UQE_cloud_migration_snapshot_uid - v2"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.339643216Z level=info msg="Migration successfully executed" id="create index UQE_cloud_migration_snapshot_uid - v2" duration=1.805344ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.439609083Z level=info msg="Executing migration" id="copy cloud_migration_snapshot v1 to v2"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.44008511Z level=info msg="Migration successfully executed" id="copy cloud_migration_snapshot v1 to v2" duration=478.427µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.471582902Z level=info msg="Executing migration" id="drop cloud_migration_snapshot_tmp_qwerty"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.473465638Z level=info msg="Migration successfully executed" id="drop cloud_migration_snapshot_tmp_qwerty" duration=1.882247ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.516441264Z level=info msg="Executing migration" id="add snapshot upload_url column"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.527832076Z level=info msg="Migration successfully executed" id="add snapshot upload_url column" duration=11.391622ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.554478116Z level=info msg="Executing migration" id="add snapshot status column"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.566614225Z level=info msg="Migration successfully executed" id="add snapshot status column" duration=12.135899ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.58887378Z level=info msg="Executing migration" id="add snapshot local_directory column"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.599286078Z level=info msg="Migration successfully executed" id="add snapshot local_directory column" duration=10.412287ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.626372913Z level=info msg="Executing migration" id="add snapshot gms_snapshot_uid column"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.637573058Z level=info msg="Migration successfully executed" id="add snapshot gms_snapshot_uid column" duration=11.200565ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.769534685Z level=info msg="Executing migration" id="add snapshot encryption_key column"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.781044201Z level=info msg="Migration successfully executed" id="add snapshot encryption_key column" duration=11.511096ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.884393087Z level=info msg="Executing migration" id="add snapshot error_string column"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.894607628Z level=info msg="Migration successfully executed" id="add snapshot error_string column" duration=10.216781ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.938943042Z level=info msg="Executing migration" id="create cloud_migration_resource table v1"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:29.940809758Z level=info msg="Migration successfully executed" id="create cloud_migration_resource table v1" duration=1.793273ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.017624518Z level=info msg="Executing migration" id="delete cloud_migration_snapshot.result column"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.056453339Z level=info msg="Migration successfully executed" id="delete cloud_migration_snapshot.result column" duration=38.83042ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.101380794Z level=info msg="Executing migration" id="add cloud_migration_resource.name column"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.113559794Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=12.178099ms
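The session and snapshot migrations above rebuild a table by renaming it to a `_tmp_qwerty` copy, creating the v2 schema plus its unique index, copying the rows across, and dropping the temporary table. A sketch of the same shape; the column list is invented since the log only records the step names:

```python
# Illustrative sketch of the rename -> create v2 -> copy -> drop idiom.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cloud_migration (id INTEGER PRIMARY KEY, stack_id INTEGER)")

conn.executescript("""
    -- "Rename table cloud_migration to cloud_migration_session_tmp_qwerty - v1"
    ALTER TABLE cloud_migration RENAME TO cloud_migration_session_tmp_qwerty;
    -- "create cloud_migration_session v2"
    CREATE TABLE cloud_migration_session (id INTEGER PRIMARY KEY, uid TEXT, stack_id INTEGER);
    -- "create index UQE_cloud_migration_session_uid - v2"
    CREATE UNIQUE INDEX UQE_cloud_migration_session_uid ON cloud_migration_session (uid);
    -- "copy cloud_migration_session v1 to v2"
    INSERT INTO cloud_migration_session (id, stack_id)
        SELECT id, stack_id FROM cloud_migration_session_tmp_qwerty;
    -- "drop cloud_migration_session_tmp_qwerty"
    DROP TABLE cloud_migration_session_tmp_qwerty;
""")
conn.commit()
```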
msg="Migration successfully executed" id="add cloud_migration_resource.name column" duration=12.178099ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.193430101Z level=info msg="Executing migration" id="add cloud_migration_resource.parent_name column" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.20078294Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.parent_name column" duration=7.353879ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.249135876Z level=info msg="Executing migration" id="add cloud_migration_session.org_id column" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.262156076Z level=info msg="Migration successfully executed" id="add cloud_migration_session.org_id column" duration=13.020499ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.370874612Z level=info msg="Executing migration" id="add cloud_migration_resource.error_code column" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.382665158Z level=info msg="Migration successfully executed" id="add cloud_migration_resource.error_code column" duration=11.789456ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.40767249Z level=info msg="Executing migration" id="increase resource_uid column length" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.407735522Z level=info msg="Migration successfully executed" id="increase resource_uid column length" duration=67.552µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.515095651Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.515137482Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=44.752µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.621866788Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.633401375Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=11.536947ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.805359642Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.812719252Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=7.36161ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.902123427Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.90279557Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=674.203µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.980046246Z level=info msg="Executing migration" id="managed folder permissions alerting silences actions migration" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:30.980585245Z level=info msg="Migration successfully executed" id="managed folder permissions alerting silences actions migration" duration=540.969µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.002659924Z level=info msg="Executing migration" id="add record column to alert_rule table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.015208157Z level=info msg="Migration successfully executed" id="add record column to alert_rule 
table" duration=12.545293ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.052077268Z level=info msg="Executing migration" id="add record column to alert_rule_version table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.064817867Z level=info msg="Migration successfully executed" id="add record column to alert_rule_version table" duration=12.744709ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.089063013Z level=info msg="Executing migration" id="add resolved_at column to alert_instance table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.101236542Z level=info msg="Migration successfully executed" id="add resolved_at column to alert_instance table" duration=12.173469ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.150064765Z level=info msg="Executing migration" id="add last_sent_at column to alert_instance table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.163147777Z level=info msg="Migration successfully executed" id="add last_sent_at column to alert_instance table" duration=13.084792ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.253815796Z level=info msg="Executing migration" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.254535301Z level=info msg="Migration successfully executed" id="Add scope to alert.notifications.receivers:read and alert.notifications.receivers.secrets:read" duration=724.605µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.261782717Z level=info msg="Executing migration" id="add metadata column to alert_rule table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.286680266Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule table" duration=24.891788ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.303464338Z level=info msg="Executing migration" id="add metadata column to alert_rule_version table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.317315707Z level=info msg="Migration successfully executed" id="add metadata column to alert_rule_version table" duration=13.860069ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.357750943Z level=info msg="Executing migration" id="delete orphaned service account permissions" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.358324064Z level=info msg="Migration successfully executed" id="delete orphaned service account permissions" duration=581.651µs 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.375102166Z level=info msg="Executing migration" id="adding action set permissions" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.376258186Z level=info msg="Migration successfully executed" id="adding action set permissions" duration=1.159011ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.399677843Z level=info msg="Executing migration" id="create user_external_session table" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.401685154Z level=info msg="Migration successfully executed" id="create user_external_session table" duration=2.006741ms 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.419135869Z level=info msg="Executing migration" id="increase name_id column length to 1024" 15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.419182351Z level=info msg="Migration successfully executed" id="increase name_id column length to 1024" duration=50.182µs 15:03:04 grafana | logger=migrator 
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.478986061Z level=info msg="Executing migration" id="increase session_id column length to 1024"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.479030543Z level=info msg="Migration successfully executed" id="increase session_id column length to 1024" duration=48.782µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.50700614Z level=info msg="Executing migration" id="remove scope from alert.notifications.receivers:create"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.507693224Z level=info msg="Migration successfully executed" id="remove scope from alert.notifications.receivers:create" duration=691.414µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.54810867Z level=info msg="Executing migration" id="add created_by column to alert_rule_version table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.561243844Z level=info msg="Migration successfully executed" id="add created_by column to alert_rule_version table" duration=13.136373ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.59289504Z level=info msg="Executing migration" id="add updated_by column to alert_rule table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.605717353Z level=info msg="Migration successfully executed" id="add updated_by column to alert_rule table" duration=12.823503ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.663413399Z level=info msg="Executing migration" id="add alert_rule_state table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.665735751Z level=info msg="Migration successfully executed" id="add alert_rule_state table" duration=2.317812ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.733791032Z level=info msg="Executing migration" id="add index to alert_rule_state on org_id and rule_uid columns"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.736035401Z level=info msg="Migration successfully executed" id="add index to alert_rule_state on org_id and rule_uid columns" duration=2.246299ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.790789883Z level=info msg="Executing migration" id="add guid column to alert_rule table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.80458039Z level=info msg="Migration successfully executed" id="add guid column to alert_rule table" duration=13.795227ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.868838947Z level=info msg="Executing migration" id="add rule_guid column to alert_rule_version table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.882036613Z level=info msg="Migration successfully executed" id="add rule_guid column to alert_rule_version table" duration=13.200005ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.931021621Z level=info msg="Executing migration" id="cleanup alert_rule_version table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.931073403Z level=info msg="Rule version record limit is not set, fallback to 100" limit=0
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.93155654Z level=info msg="Cleaning up table `alert_rule_version`" batchSize=50 batches=0 keepVersions=100
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:31.931581011Z level=info msg="Migration successfully executed" id="cleanup alert_rule_version table" duration=562.32µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.012680332Z level=info msg="Executing migration" id="populate rule guid in alert rule table"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.014048831Z level=info msg="Migration successfully executed" id="populate rule guid in alert rule table" duration=1.370719ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.102710659Z level=info msg="Executing migration" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.104968069Z level=info msg="Migration successfully executed" id="drop index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=2.26324ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.179489398Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.181970956Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid, rule_guid and version columns" duration=2.484058ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.304727607Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_guid and version columns"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.307048389Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_guid and version columns" duration=2.323512ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.394128882Z level=info msg="Executing migration" id="add index in alert_rule table on guid columns"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.396438613Z level=info msg="Migration successfully executed" id="add index in alert_rule table on guid columns" duration=2.311861ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.47314743Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.487581229Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule" duration=14.45793ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.508445965Z level=info msg="Executing migration" id="add keep_firing_for column to alert_rule_version"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.521196965Z level=info msg="Migration successfully executed" id="add keep_firing_for column to alert_rule_version" duration=12.75206ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.545486062Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.561743956Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule" duration=16.252924ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.604608148Z level=info msg="Executing migration" id="add missing_series_evals_to_resolve column to alert_rule_version"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.612132474Z level=info msg="Migration successfully executed" id="add missing_series_evals_to_resolve column to alert_rule_version" duration=7.526896ms
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.673142817Z level=info msg="Executing migration" id="remove the datasources:drilldown action"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.673330193Z level=info msg="Removed 0 datasources:drilldown permissions"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.673341984Z level=info msg="Migration successfully executed" id="remove the datasources:drilldown action" duration=199.557µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.736747841Z level=info msg="Executing migration" id="remove title in folder unique index"
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.737671954Z level=info msg="Migration successfully executed" id="remove title in folder unique index" duration=924.212µs
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.835562628Z level=info msg="migrations completed" performed=654 skipped=0 duration=47.136971843s
15:03:04 grafana | logger=migrator t=2025-06-13T14:57:32.837095222Z level=info msg="Unlocking database"
15:03:04 grafana | logger=sqlstore t=2025-06-13T14:57:32.85661392Z level=info msg="Created default admin" user=admin
15:03:04 grafana | logger=sqlstore t=2025-06-13T14:57:32.856808317Z level=info msg="Created default organization"
15:03:04 grafana | logger=secrets t=2025-06-13T14:57:32.96601317Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
15:03:04 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-13T14:57:33.067582424Z level=info msg="Restored cache from database" duration=523.128µs
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.075985981Z level=info msg="Locking database"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.076002561Z level=info msg="Starting DB migrations"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.083553278Z level=info msg="Executing migration" id="create resource_migration_log table"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.084289254Z level=info msg="Migration successfully executed" id="create resource_migration_log table" duration=736.056µs
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.123071672Z level=info msg="Executing migration" id="Initialize resource tables"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.123109564Z level=info msg="Migration successfully executed" id="Initialize resource tables" duration=39.402µs
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.157338201Z level=info msg="Executing migration" id="drop table resource"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.157616631Z level=info msg="Migration successfully executed" id="drop table resource" duration=282.06µs
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.229833779Z level=info msg="Executing migration" id="create table resource"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.231856011Z level=info msg="Migration successfully executed" id="create table resource" duration=2.025772ms
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.251900318Z level=info msg="Executing migration" id="create table resource, index: 0"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.253621249Z level=info msg="Migration successfully executed" id="create table resource, index: 0" duration=1.71793ms
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.28852047Z level=info msg="Executing migration" id="drop table resource_history"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.288757658Z level=info msg="Migration successfully executed" id="drop table resource_history" duration=236.378µs
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.326755039Z level=info msg="Executing migration" id="create table resource_history"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.328601134Z level=info msg="Migration successfully executed" id="create table resource_history" duration=1.846105ms
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.375849791Z level=info msg="Executing migration" id="create table resource_history, index: 0"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.378178484Z level=info msg="Migration successfully executed" id="create table resource_history, index: 0" duration=2.331742ms
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.391273316Z level=info msg="Executing migration" id="create table resource_history, index: 1"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.393059619Z level=info msg="Migration successfully executed" id="create table resource_history, index: 1" duration=1.785833ms
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.407029302Z level=info msg="Executing migration" id="drop table resource_version"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.407180797Z level=info msg="Migration successfully executed" id="drop table resource_version" duration=149.826µs
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.430327554Z level=info msg="Executing migration" id="create table resource_version"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.431705542Z level=info msg="Migration successfully executed" id="create table resource_version" duration=1.377858ms
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.456669253Z level=info msg="Executing migration" id="create table resource_version, index: 0"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.458639983Z level=info msg="Migration successfully executed" id="create table resource_version, index: 0" duration=1.97195ms
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.474922737Z level=info msg="Executing migration" id="drop table resource_blob"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.475081463Z level=info msg="Migration successfully executed" id="drop table resource_blob" duration=160.276µs
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.483769839Z level=info msg="Executing migration" id="create table resource_blob"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.485172629Z level=info msg="Migration successfully executed" id="create table resource_blob" duration=1.40352ms
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.496050883Z level=info msg="Executing migration" id="create table resource_blob, index: 0"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.498714217Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 0" duration=2.667225ms
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.505817987Z level=info msg="Executing migration" id="create table resource_blob, index: 1"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.50729987Z level=info msg="Migration successfully executed" id="create table resource_blob, index: 1" duration=1.483083ms
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.519471459Z level=info msg="Executing migration" id="Add column previous_resource_version in resource_history"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.532860662Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource_history" duration=13.393182ms
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.55096058Z level=info msg="Executing migration" id="Add column previous_resource_version in resource"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.566984206Z level=info msg="Migration successfully executed" id="Add column previous_resource_version in resource" duration=16.027315ms
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.628144704Z level=info msg="Executing migration" id="Add index to resource_history for polling"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.629630976Z level=info msg="Migration successfully executed" id="Add index to resource_history for polling" duration=1.486342ms
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.633970169Z level=info msg="Executing migration" id="Add index to resource for loading"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.635216213Z level=info msg="Migration successfully executed" id="Add index to resource for loading" duration=1.245564ms
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.644784551Z level=info msg="Executing migration" id="Add column folder in resource_history"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.65724503Z level=info msg="Migration successfully executed" id="Add column folder in resource_history" duration=12.460849ms
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.675811676Z level=info msg="Executing migration" id="Add column folder in resource"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.688843285Z level=info msg="Migration successfully executed" id="Add column folder in resource" duration=13.03257ms
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.700069381Z level=info msg="Executing migration" id="Migrate DeletionMarkers to real Resource objects"
15:03:04 grafana | logger=deletion-marker-migrator t=2025-06-13T14:57:33.700222687Z level=info msg="finding any deletion markers"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.701020745Z level=info msg="Migration successfully executed" id="Migrate DeletionMarkers to real Resource objects" duration=950.654µs
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.710309473Z level=info msg="Executing migration" id="Add index to resource_history for get trash"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.712390496Z level=info msg="Migration successfully executed" id="Add index to resource_history for get trash" duration=2.080123ms
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.716817632Z level=info msg="Executing migration" id="Add generation to resource history"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.727715656Z level=info msg="Migration successfully executed" id="Add generation to resource history" duration=10.901194ms
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.743537034Z level=info msg="Executing migration" id="Add generation index to resource history"
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.745663609Z level=info msg="Migration successfully executed" id="Add generation index to resource history" duration=2.125575ms
15:03:04 grafana | logger=resource-migrator t=2025-06-13T14:57:33.756697319Z level=info msg="migrations completed" performed=26 skipped=0 duration=673.190122ms
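Both migrators above run id-keyed steps exactly once: each "Executing migration" / "Migration successfully executed" pair is recorded in a log table so a later startup counts it as skipped, and the whole run sits between "Locking database" and "Unlocking database". A minimal runner in that style; the table layout and helper are invented for the sketch:

```python
# Illustrative id-keyed, run-once migration runner.
import sqlite3
import time

def run_migrations(conn: sqlite3.Connection, migrations: dict[str, str]) -> None:
    conn.execute("""CREATE TABLE IF NOT EXISTS migration_log
                    (migration_id TEXT PRIMARY KEY, timestamp REAL)""")
    done = {row[0] for row in conn.execute("SELECT migration_id FROM migration_log")}
    for mig_id, sql in migrations.items():
        if mig_id in done:
            continue                      # counted as "skipped" in the summary line
        start = time.monotonic()
        print(f'Executing migration id="{mig_id}"')
        conn.execute(sql)
        conn.execute("INSERT INTO migration_log VALUES (?, ?)", (mig_id, time.time()))
        conn.commit()
        print(f'Migration successfully executed id="{mig_id}" '
              f'duration={(time.monotonic() - start) * 1e3:.3f}ms')

run_migrations(sqlite3.connect(":memory:"),
               {"create folder table": "CREATE TABLE folder (uid TEXT PRIMARY KEY)"})
```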
database" 15:03:04 grafana | t=2025-06-13T14:57:33.758293555Z level=info caller=logger.go:214 time=2025-06-13T14:57:33.758255654Z msg="Using channel notifier" logger=sql-resource-server 15:03:04 grafana | logger=plugin.store t=2025-06-13T14:57:33.772097192Z level=info msg="Loading plugins..." 15:03:04 grafana | logger=plugins.registration t=2025-06-13T14:57:33.824429138Z level=error msg="Could not register plugin" pluginId=table error="plugin table is already registered" 15:03:04 grafana | logger=plugins.initialization t=2025-06-13T14:57:33.824537532Z level=error msg="Could not initialize plugin" pluginId=table error="plugin table is already registered" 15:03:04 grafana | logger=plugin.store t=2025-06-13T14:57:33.824608285Z level=info msg="Plugins loaded" count=53 duration=52.512353ms 15:03:04 grafana | logger=query_data t=2025-06-13T14:57:33.840486575Z level=info msg="Query Service initialization" 15:03:04 grafana | logger=live.push_http t=2025-06-13T14:57:33.845812173Z level=info msg="Live Push Gateway initialization" 15:03:04 grafana | logger=ngalert.notifier.alertmanager org=1 t=2025-06-13T14:57:33.862856804Z level=info msg="Applying new configuration to Alertmanager" configHash=d2c56faca6af2a5772ff4253222f7386 15:03:04 grafana | logger=ngalert t=2025-06-13T14:57:33.872743223Z level=info msg="Using simple database alert instance store" 15:03:04 grafana | logger=ngalert.state.manager.persist t=2025-06-13T14:57:33.872830846Z level=info msg="Using sync state persister" 15:03:04 grafana | logger=infra.usagestats.collector t=2025-06-13T14:57:33.87888388Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 15:03:04 grafana | logger=ngalert.state.manager t=2025-06-13T14:57:33.879424419Z level=info msg="Warming state cache for startup" 15:03:04 grafana | logger=ngalert.state.manager t=2025-06-13T14:57:33.881418309Z level=info msg="State cache has been initialized" states=0 duration=1.99078ms 15:03:04 grafana | logger=grafanaStorageLogger t=2025-06-13T14:57:33.884413875Z level=info msg="Storage starting" 15:03:04 grafana | logger=http.server t=2025-06-13T14:57:33.885348978Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 15:03:04 grafana | logger=ngalert.multiorg.alertmanager t=2025-06-13T14:57:33.8862563Z level=info msg="Starting MultiOrg Alertmanager" 15:03:04 grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:57:33.8987181Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version= 15:03:04 grafana | logger=ngalert.scheduler t=2025-06-13T14:57:33.899962744Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=3 15:03:04 grafana | logger=ticker t=2025-06-13T14:57:33.900120449Z level=info msg=starting first_tick=2025-06-13T14:57:40Z 15:03:04 grafana | logger=provisioning.datasources t=2025-06-13T14:57:33.946284278Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 15:03:04 grafana | logger=sqlstore.transactions t=2025-06-13T14:57:33.97353693Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 15:03:04 grafana | logger=grafana.update.checker t=2025-06-13T14:57:33.990985525Z level=info msg="Update check succeeded" duration=105.15129ms 15:03:04 grafana | logger=plugins.update.checker t=2025-06-13T14:57:33.996273832Z level=info msg="Update check succeeded" duration=111.294057ms 15:03:04 grafana | logger=provisioning.alerting t=2025-06-13T14:57:34.064312463Z level=info msg="starting to provision alerting" 15:03:04 grafana 
15:03:04 grafana | logger=provisioning.alerting t=2025-06-13T14:57:34.064352124Z level=info msg="finished to provision alerting"
15:03:04 grafana | logger=provisioning.dashboard t=2025-06-13T14:57:34.067129432Z level=info msg="starting to provision dashboards"
15:03:04 grafana | logger=sqlstore.transactions t=2025-06-13T14:57:34.079474168Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0
15:03:04 grafana | logger=plugin.angulardetectorsprovider.dynamic t=2025-06-13T14:57:34.17589226Z level=info msg="Patterns update finished" duration=210.926053ms
15:03:04 grafana | logger=sqlstore.transactions t=2025-06-13T14:57:34.215136694Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0
15:03:04 grafana | logger=plugin.installer t=2025-06-13T14:57:34.259152988Z level=info msg="Installing plugin" pluginId=grafana-pyroscope-app version=
15:03:04 grafana | logger=installer.fs t=2025-06-13T14:57:34.357642893Z level=info msg="Downloaded and extracted grafana-pyroscope-app v1.4.1 zip successfully to /var/lib/grafana/plugins/grafana-pyroscope-app"
15:03:04 grafana | logger=plugins.registration t=2025-06-13T14:57:34.381110561Z level=info msg="Plugin registered" pluginId=grafana-pyroscope-app
15:03:04 grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:57:34.381136012Z level=info msg="Plugin successfully installed" pluginId=grafana-pyroscope-app version= duration=482.313369ms
15:03:04 grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:57:34.381289197Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version=
15:03:04 grafana | logger=plugin.installer t=2025-06-13T14:57:34.594241541Z level=info msg="Installing plugin" pluginId=grafana-exploretraces-app version=
15:03:04 grafana | logger=installer.fs t=2025-06-13T14:57:34.654512698Z level=info msg="Downloaded and extracted grafana-exploretraces-app v1.0.0 zip successfully to /var/lib/grafana/plugins/grafana-exploretraces-app"
15:03:04 grafana | logger=plugins.registration t=2025-06-13T14:57:34.671399424Z level=info msg="Plugin registered" pluginId=grafana-exploretraces-app
15:03:04 grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:57:34.671436165Z level=info msg="Plugin successfully installed" pluginId=grafana-exploretraces-app version= duration=290.130927ms
15:03:04 grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:57:34.671469756Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version=
15:03:04 grafana | logger=grafana-apiserver t=2025-06-13T14:57:34.679368005Z level=info msg="Adding GroupVersion iam.grafana.app v0alpha1 to ResourceManager"
15:03:04 grafana | logger=grafana-apiserver t=2025-06-13T14:57:34.681608064Z level=info msg="Adding GroupVersion notifications.alerting.grafana.app v0alpha1 to ResourceManager"
15:03:04 grafana | logger=grafana-apiserver t=2025-06-13T14:57:34.682406922Z level=info msg="Adding GroupVersion userstorage.grafana.app v0alpha1 to ResourceManager"
15:03:04 grafana | logger=grafana-apiserver t=2025-06-13T14:57:34.68320054Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
15:03:04 grafana | logger=grafana-apiserver t=2025-06-13T14:57:34.70302764Z level=info msg="Adding GroupVersion dashboard.grafana.app v1beta1 to ResourceManager"
15:03:04 grafana | logger=grafana-apiserver t=2025-06-13T14:57:34.70445152Z level=info msg="Adding GroupVersion dashboard.grafana.app v0alpha1 to ResourceManager"
15:03:04 grafana | logger=grafana-apiserver t=2025-06-13T14:57:34.705817568Z level=info msg="Adding GroupVersion dashboard.grafana.app v2alpha1 to ResourceManager"
15:03:04 grafana | logger=grafana-apiserver t=2025-06-13T14:57:34.707485157Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
15:03:04 grafana | logger=grafana-apiserver t=2025-06-13T14:57:34.708690189Z level=info msg="Adding GroupVersion folder.grafana.app v1beta1 to ResourceManager"
15:03:04 grafana | logger=app-registry t=2025-06-13T14:57:34.776912577Z level=info msg="app registry initialized"
15:03:04 grafana | logger=plugin.installer t=2025-06-13T14:57:34.865543634Z level=info msg="Installing plugin" pluginId=grafana-metricsdrilldown-app version=
15:03:04 grafana | logger=installer.fs t=2025-06-13T14:57:34.95301279Z level=info msg="Downloaded and extracted grafana-metricsdrilldown-app v1.0.1 zip successfully to /var/lib/grafana/plugins/grafana-metricsdrilldown-app"
15:03:04 grafana | logger=plugins.registration t=2025-06-13T14:57:34.979241246Z level=info msg="Plugin registered" pluginId=grafana-metricsdrilldown-app
15:03:04 grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:57:34.979268377Z level=info msg="Plugin successfully installed" pluginId=grafana-metricsdrilldown-app version= duration=307.786731ms
15:03:04 grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:57:34.979452673Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version=
15:03:04 grafana | logger=plugin.installer t=2025-06-13T14:57:35.220697542Z level=info msg="Installing plugin" pluginId=grafana-lokiexplore-app version=
15:03:04 grafana | logger=installer.fs t=2025-06-13T14:57:35.410983898Z level=info msg="Downloaded and extracted grafana-lokiexplore-app v1.0.17 zip successfully to /var/lib/grafana/plugins/grafana-lokiexplore-app"
15:03:04 grafana | logger=plugins.registration t=2025-06-13T14:57:35.467962126Z level=info msg="Plugin registered" pluginId=grafana-lokiexplore-app
15:03:04 grafana | logger=plugin.backgroundinstaller t=2025-06-13T14:57:35.467992677Z level=info msg="Plugin successfully installed" pluginId=grafana-lokiexplore-app version= duration=488.526963ms
15:03:04 grafana | logger=provisioning.dashboard t=2025-06-13T14:57:35.569392107Z level=info msg="finished to provision dashboards"
15:03:04 grafana | logger=infra.usagestats t=2025-06-13T14:58:14.896394732Z level=info msg="Usage stats are ready to report"
15:03:04 kafka | ===> User
15:03:04 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
15:03:04 kafka | ===> Configuring ...
15:03:04 kafka | Running in Zookeeper mode...
15:03:04 kafka | ===> Running preflight checks ...
15:03:04 kafka | ===> Check if /var/lib/kafka/data is writable ...
15:03:04 kafka | ===> Check if Zookeeper is healthy ...
15:03:04 kafka | [2025-06-13 14:56:53,538] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
15:03:04 kafka | [2025-06-13 14:56:53,539] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper)
15:03:04 kafka | [2025-06-13 14:56:53,539] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper)
15:03:04 kafka | [2025-06-13 14:56:53,539] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
15:03:04 kafka | [2025-06-13 14:56:53,539] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
15:03:04 kafka | [2025-06-13 14:56:53,539] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-storage-api-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/kafka-clients-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-7.4.9-ccs.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-raft-7.4.9-ccs.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.4.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.2-1.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.4.9-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.4.9.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/kafka-metadata-7.4.9-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/commons-io-2.16.0.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/zookeeper-3.8.4.jar:/usr/share/java/cp-base-new/kafka_2.13-7.4.9-ccs.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/utility-belt-7.4.9-53.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.4.9.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper)
15:03:04 kafka | [2025-06-13 14:56:53,539] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
15:03:04 kafka | [2025-06-13 14:56:53,539] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
15:03:04 kafka | [2025-06-13 14:56:53,539] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
15:03:04 kafka | [2025-06-13 14:56:53,539] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
14:56:53,539] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:53,539] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:53,540] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:53,540] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:53,540] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:53,540] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:53,540] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:53,540] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:53,544] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@19dc67c2 (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:53,547] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 15:03:04 kafka | [2025-06-13 14:56:53,552] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 15:03:04 kafka | [2025-06-13 14:56:53,561] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 15:03:04 kafka | [2025-06-13 14:56:53,579] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 15:03:04 kafka | [2025-06-13 14:56:53,580] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 15:03:04 kafka | [2025-06-13 14:56:53,593] INFO Socket connection established, initiating session, client: /172.17.0.7:53628, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 15:03:04 kafka | [2025-06-13 14:56:53,669] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000030c100000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 15:03:04 kafka | [2025-06-13 14:56:53,811] INFO Session: 0x10000030c100000 closed (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | Using log4j config /etc/kafka/log4j.properties 15:03:04 kafka | ===> Launching ... 15:03:04 kafka | ===> Launching kafka ... 15:03:04 kafka | [2025-06-13 14:56:54,841] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 15:03:04 kafka | [2025-06-13 14:56:55,179] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 15:03:04 kafka | [2025-06-13 14:56:55,270] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 15:03:04 kafka | [2025-06-13 14:56:55,271] INFO starting (kafka.server.KafkaServer) 15:03:04 kafka | [2025-06-13 14:56:55,272] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 15:03:04 kafka | [2025-06-13 14:56:55,285] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) 15:03:04 kafka | [2025-06-13 14:56:55,289] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:55,289] INFO Client environment:host.name=kafka (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:55,289] INFO Client environment:java.version=11.0.26 (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:55,289] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:55,289] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:55,289] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-storage-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/netty-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-shell-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/kafka-clients-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-storage-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.0.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.2-1.jar:/usr/bin/../share/java/kafka/connect-runtime-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.115.Final.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-api-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.115.Final.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.115.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.1.2.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.115.Final.jar:/usr/bin/../share/java/kafka/c
onnect-basic-auth-extension-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/trogdor-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/commons-io-2.16.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-transforms-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/connect-mirror-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-tools-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.115.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.4.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.4.9-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.115.Final.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:55,289] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:55,289] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:55,289] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:55,289] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:55,289] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:55,289] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:55,289] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:55,289] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:55,289] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:55,289] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:55,289] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:55,289] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:55,291] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@52851b44 (org.apache.zookeeper.ZooKeeper) 15:03:04 kafka | [2025-06-13 14:56:55,295] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 15:03:04 kafka | [2025-06-13 14:56:55,301] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 15:03:04 kafka | [2025-06-13 14:56:55,306] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 15:03:04 kafka | [2025-06-13 14:56:55,309] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 15:03:04 kafka | [2025-06-13 14:56:55,317] INFO Socket connection established, initiating session, client: /172.17.0.7:53630, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 15:03:04 kafka | [2025-06-13 14:56:55,375] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000030c100001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 15:03:04 kafka | [2025-06-13 14:56:55,384] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) 15:03:04 kafka | [2025-06-13 14:56:57,023] INFO Cluster ID = joYG5LnGQiS1GzzhjdPKfA (kafka.server.KafkaServer) 15:03:04 kafka | [2025-06-13 14:56:57,028] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 15:03:04 kafka | [2025-06-13 14:56:57,078] INFO KafkaConfig values: 15:03:04 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 15:03:04 kafka | alter.config.policy.class.name = null 15:03:04 kafka | alter.log.dirs.replication.quota.window.num = 11 15:03:04 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 15:03:04 kafka | authorizer.class.name = 15:03:04 kafka | auto.create.topics.enable = true 15:03:04 kafka | auto.include.jmx.reporter = true 15:03:04 kafka | auto.leader.rebalance.enable = true 15:03:04 kafka | background.threads = 10 15:03:04 kafka | broker.heartbeat.interval.ms = 2000 15:03:04 kafka | broker.id = 1 15:03:04 kafka | broker.id.generation.enable = true 15:03:04 kafka | broker.rack = null 15:03:04 kafka | broker.session.timeout.ms = 9000 15:03:04 kafka | client.quota.callback.class = null 15:03:04 kafka | compression.type = producer 15:03:04 kafka | connection.failed.authentication.delay.ms = 100 15:03:04 kafka | connections.max.idle.ms = 600000 15:03:04 kafka | connections.max.reauth.ms = 0 15:03:04 kafka | control.plane.listener.name = null 15:03:04 kafka | controlled.shutdown.enable = true 15:03:04 kafka | controlled.shutdown.max.retries = 3 15:03:04 kafka | controlled.shutdown.retry.backoff.ms = 5000 15:03:04 kafka | controller.listener.names = null 15:03:04 kafka | controller.quorum.append.linger.ms = 25 15:03:04 kafka | controller.quorum.election.backoff.max.ms = 1000 15:03:04 kafka | controller.quorum.election.timeout.ms = 1000 15:03:04 kafka | controller.quorum.fetch.timeout.ms = 2000 15:03:04 kafka | controller.quorum.request.timeout.ms = 2000 15:03:04 kafka | controller.quorum.retry.backoff.ms = 20 15:03:04 kafka | controller.quorum.voters = [] 15:03:04 kafka | controller.quota.window.num = 11 15:03:04 kafka | controller.quota.window.size.seconds = 1 15:03:04 kafka | controller.socket.timeout.ms = 30000 15:03:04 kafka | create.topic.policy.class.name = null 15:03:04 kafka | default.replication.factor = 1 15:03:04 kafka | delegation.token.expiry.check.interval.ms = 3600000 15:03:04 kafka | delegation.token.expiry.time.ms = 86400000 15:03:04 kafka | delegation.token.master.key = null 15:03:04 kafka | delegation.token.max.lifetime.ms = 604800000 15:03:04 kafka | delegation.token.secret.key = null 15:03:04 kafka | delete.records.purgatory.purge.interval.requests = 1 15:03:04 kafka | delete.topic.enable = true 15:03:04 kafka | early.start.listeners = null 15:03:04 kafka | fetch.max.bytes = 57671680 15:03:04 kafka | fetch.purgatory.purge.interval.requests = 1000 15:03:04 kafka | group.initial.rebalance.delay.ms = 3000 15:03:04 kafka | group.max.session.timeout.ms = 1800000 15:03:04 kafka | group.max.size = 2147483647 15:03:04 kafka | group.min.session.timeout.ms = 6000 15:03:04 kafka | initial.broker.registration.timeout.ms = 60000 15:03:04 kafka | inter.broker.listener.name = PLAINTEXT 15:03:04 kafka | inter.broker.protocol.version = 3.4-IV0 15:03:04 kafka | kafka.metrics.polling.interval.secs = 10 15:03:04 kafka | kafka.metrics.reporters = [] 15:03:04 kafka | leader.imbalance.check.interval.seconds = 300 15:03:04 kafka | leader.imbalance.per.broker.percentage = 10 15:03:04 kafka | listener.security.protocol.map = 
PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 15:03:04 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 15:03:04 kafka | log.cleaner.backoff.ms = 15000 15:03:04 kafka | log.cleaner.dedupe.buffer.size = 134217728 15:03:04 kafka | log.cleaner.delete.retention.ms = 86400000 15:03:04 kafka | log.cleaner.enable = true 15:03:04 kafka | log.cleaner.io.buffer.load.factor = 0.9 15:03:04 kafka | log.cleaner.io.buffer.size = 524288 15:03:04 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 15:03:04 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 15:03:04 kafka | log.cleaner.min.cleanable.ratio = 0.5 15:03:04 kafka | log.cleaner.min.compaction.lag.ms = 0 15:03:04 kafka | log.cleaner.threads = 1 15:03:04 kafka | log.cleanup.policy = [delete] 15:03:04 kafka | log.dir = /tmp/kafka-logs 15:03:04 kafka | log.dirs = /var/lib/kafka/data 15:03:04 kafka | log.flush.interval.messages = 9223372036854775807 15:03:04 kafka | log.flush.interval.ms = null 15:03:04 kafka | log.flush.offset.checkpoint.interval.ms = 60000 15:03:04 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 15:03:04 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 15:03:04 kafka | log.index.interval.bytes = 4096 15:03:04 kafka | log.index.size.max.bytes = 10485760 15:03:04 kafka | log.message.downconversion.enable = true 15:03:04 kafka | log.message.format.version = 3.0-IV1 15:03:04 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 15:03:04 kafka | log.message.timestamp.type = CreateTime 15:03:04 kafka | log.preallocate = false 15:03:04 kafka | log.retention.bytes = -1 15:03:04 kafka | log.retention.check.interval.ms = 300000 15:03:04 kafka | log.retention.hours = 168 15:03:04 kafka | log.retention.minutes = null 15:03:04 kafka | log.retention.ms = null 15:03:04 kafka | log.roll.hours = 168 15:03:04 kafka | log.roll.jitter.hours = 0 15:03:04 kafka | log.roll.jitter.ms = null 15:03:04 kafka | log.roll.ms = null 15:03:04 kafka | log.segment.bytes = 1073741824 15:03:04 kafka | log.segment.delete.delay.ms = 60000 15:03:04 kafka | max.connection.creation.rate = 2147483647 15:03:04 kafka | max.connections = 2147483647 15:03:04 kafka | max.connections.per.ip = 2147483647 15:03:04 kafka | max.connections.per.ip.overrides = 15:03:04 kafka | max.incremental.fetch.session.cache.slots = 1000 15:03:04 kafka | message.max.bytes = 1048588 15:03:04 kafka | metadata.log.dir = null 15:03:04 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 15:03:04 kafka | metadata.log.max.snapshot.interval.ms = 3600000 15:03:04 kafka | metadata.log.segment.bytes = 1073741824 15:03:04 kafka | metadata.log.segment.min.bytes = 8388608 15:03:04 kafka | metadata.log.segment.ms = 604800000 15:03:04 kafka | metadata.max.idle.interval.ms = 500 15:03:04 kafka | metadata.max.retention.bytes = 104857600 15:03:04 kafka | metadata.max.retention.ms = 604800000 15:03:04 kafka | metric.reporters = [] 15:03:04 kafka | metrics.num.samples = 2 15:03:04 kafka | metrics.recording.level = INFO 15:03:04 kafka | metrics.sample.window.ms = 30000 15:03:04 kafka | min.insync.replicas = 1 15:03:04 kafka | node.id = 1 15:03:04 kafka | num.io.threads = 8 15:03:04 kafka | num.network.threads = 3 15:03:04 kafka | num.partitions = 1 15:03:04 kafka | num.recovery.threads.per.data.dir = 1 15:03:04 kafka | num.replica.alter.log.dirs.threads = null 15:03:04 kafka | num.replica.fetchers = 1 15:03:04 kafka | offset.metadata.max.bytes = 4096 15:03:04 kafka | offsets.commit.required.acks = -1 
15:03:04 kafka | offsets.commit.timeout.ms = 5000 15:03:04 kafka | offsets.load.buffer.size = 5242880 15:03:04 kafka | offsets.retention.check.interval.ms = 600000 15:03:04 kafka | offsets.retention.minutes = 10080 15:03:04 kafka | offsets.topic.compression.codec = 0 15:03:04 kafka | offsets.topic.num.partitions = 50 15:03:04 kafka | offsets.topic.replication.factor = 1 15:03:04 kafka | offsets.topic.segment.bytes = 104857600 15:03:04 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 15:03:04 kafka | password.encoder.iterations = 4096 15:03:04 kafka | password.encoder.key.length = 128 15:03:04 kafka | password.encoder.keyfactory.algorithm = null 15:03:04 kafka | password.encoder.old.secret = null 15:03:04 kafka | password.encoder.secret = null 15:03:04 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 15:03:04 kafka | process.roles = [] 15:03:04 kafka | producer.id.expiration.check.interval.ms = 600000 15:03:04 kafka | producer.id.expiration.ms = 86400000 15:03:04 kafka | producer.purgatory.purge.interval.requests = 1000 15:03:04 kafka | queued.max.request.bytes = -1 15:03:04 kafka | queued.max.requests = 500 15:03:04 kafka | quota.window.num = 11 15:03:04 kafka | quota.window.size.seconds = 1 15:03:04 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 15:03:04 kafka | remote.log.manager.task.interval.ms = 30000 15:03:04 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 15:03:04 kafka | remote.log.manager.task.retry.backoff.ms = 500 15:03:04 kafka | remote.log.manager.task.retry.jitter = 0.2 15:03:04 kafka | remote.log.manager.thread.pool.size = 10 15:03:04 kafka | remote.log.metadata.manager.class.name = null 15:03:04 kafka | remote.log.metadata.manager.class.path = null 15:03:04 kafka | remote.log.metadata.manager.impl.prefix = null 15:03:04 kafka | remote.log.metadata.manager.listener.name = null 15:03:04 kafka | remote.log.reader.max.pending.tasks = 100 15:03:04 kafka | remote.log.reader.threads = 10 15:03:04 kafka | remote.log.storage.manager.class.name = null 15:03:04 kafka | remote.log.storage.manager.class.path = null 15:03:04 kafka | remote.log.storage.manager.impl.prefix = null 15:03:04 kafka | remote.log.storage.system.enable = false 15:03:04 kafka | replica.fetch.backoff.ms = 1000 15:03:04 kafka | replica.fetch.max.bytes = 1048576 15:03:04 kafka | replica.fetch.min.bytes = 1 15:03:04 kafka | replica.fetch.response.max.bytes = 10485760 15:03:04 kafka | replica.fetch.wait.max.ms = 500 15:03:04 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 15:03:04 kafka | replica.lag.time.max.ms = 30000 15:03:04 kafka | replica.selector.class = null 15:03:04 kafka | replica.socket.receive.buffer.bytes = 65536 15:03:04 kafka | replica.socket.timeout.ms = 30000 15:03:04 kafka | replication.quota.window.num = 11 15:03:04 kafka | replication.quota.window.size.seconds = 1 15:03:04 kafka | request.timeout.ms = 30000 15:03:04 kafka | reserved.broker.max.id = 1000 15:03:04 kafka | sasl.client.callback.handler.class = null 15:03:04 kafka | sasl.enabled.mechanisms = [GSSAPI] 15:03:04 kafka | sasl.jaas.config = null 15:03:04 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 15:03:04 kafka | sasl.kerberos.min.time.before.relogin = 60000 15:03:04 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 15:03:04 kafka | sasl.kerberos.service.name = null 15:03:04 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 15:03:04 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 15:03:04 kafka | 
sasl.login.callback.handler.class = null 15:03:04 kafka | sasl.login.class = null 15:03:04 kafka | sasl.login.connect.timeout.ms = null 15:03:04 kafka | sasl.login.read.timeout.ms = null 15:03:04 kafka | sasl.login.refresh.buffer.seconds = 300 15:03:04 kafka | sasl.login.refresh.min.period.seconds = 60 15:03:04 kafka | sasl.login.refresh.window.factor = 0.8 15:03:04 kafka | sasl.login.refresh.window.jitter = 0.05 15:03:04 kafka | sasl.login.retry.backoff.max.ms = 10000 15:03:04 kafka | sasl.login.retry.backoff.ms = 100 15:03:04 kafka | sasl.mechanism.controller.protocol = GSSAPI 15:03:04 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 15:03:04 kafka | sasl.oauthbearer.clock.skew.seconds = 30 15:03:04 kafka | sasl.oauthbearer.expected.audience = null 15:03:04 kafka | sasl.oauthbearer.expected.issuer = null 15:03:04 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 15:03:04 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 15:03:04 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 15:03:04 kafka | sasl.oauthbearer.jwks.endpoint.url = null 15:03:04 kafka | sasl.oauthbearer.scope.claim.name = scope 15:03:04 kafka | sasl.oauthbearer.sub.claim.name = sub 15:03:04 kafka | sasl.oauthbearer.token.endpoint.url = null 15:03:04 kafka | sasl.server.callback.handler.class = null 15:03:04 kafka | sasl.server.max.receive.size = 524288 15:03:04 kafka | security.inter.broker.protocol = PLAINTEXT 15:03:04 kafka | security.providers = null 15:03:04 kafka | socket.connection.setup.timeout.max.ms = 30000 15:03:04 kafka | socket.connection.setup.timeout.ms = 10000 15:03:04 kafka | socket.listen.backlog.size = 50 15:03:04 kafka | socket.receive.buffer.bytes = 102400 15:03:04 kafka | socket.request.max.bytes = 104857600 15:03:04 kafka | socket.send.buffer.bytes = 102400 15:03:04 kafka | ssl.cipher.suites = [] 15:03:04 kafka | ssl.client.auth = none 15:03:04 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 15:03:04 kafka | ssl.endpoint.identification.algorithm = https 15:03:04 kafka | ssl.engine.factory.class = null 15:03:04 kafka | ssl.key.password = null 15:03:04 kafka | ssl.keymanager.algorithm = SunX509 15:03:04 kafka | ssl.keystore.certificate.chain = null 15:03:04 kafka | ssl.keystore.key = null 15:03:04 kafka | ssl.keystore.location = null 15:03:04 kafka | ssl.keystore.password = null 15:03:04 kafka | ssl.keystore.type = JKS 15:03:04 kafka | ssl.principal.mapping.rules = DEFAULT 15:03:04 kafka | ssl.protocol = TLSv1.3 15:03:04 kafka | ssl.provider = null 15:03:04 kafka | ssl.secure.random.implementation = null 15:03:04 kafka | ssl.trustmanager.algorithm = PKIX 15:03:04 kafka | ssl.truststore.certificates = null 15:03:04 kafka | ssl.truststore.location = null 15:03:04 kafka | ssl.truststore.password = null 15:03:04 kafka | ssl.truststore.type = JKS 15:03:04 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 15:03:04 kafka | transaction.max.timeout.ms = 900000 15:03:04 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 15:03:04 kafka | transaction.state.log.load.buffer.size = 5242880 15:03:04 kafka | transaction.state.log.min.isr = 2 15:03:04 kafka | transaction.state.log.num.partitions = 50 15:03:04 kafka | transaction.state.log.replication.factor = 3 15:03:04 kafka | transaction.state.log.segment.bytes = 104857600 15:03:04 kafka | transactional.id.expiration.ms = 604800000 15:03:04 kafka | unclean.leader.election.enable = false 15:03:04 kafka | zookeeper.clientCnxnSocket = null 15:03:04 kafka | 
zookeeper.connect = zookeeper:2181 15:03:04 kafka | zookeeper.connection.timeout.ms = null 15:03:04 kafka | zookeeper.max.in.flight.requests = 10 15:03:04 kafka | zookeeper.metadata.migration.enable = false 15:03:04 kafka | zookeeper.session.timeout.ms = 18000 15:03:04 kafka | zookeeper.set.acl = false 15:03:04 kafka | zookeeper.ssl.cipher.suites = null 15:03:04 kafka | zookeeper.ssl.client.enable = false 15:03:04 kafka | zookeeper.ssl.crl.enable = false 15:03:04 kafka | zookeeper.ssl.enabled.protocols = null 15:03:04 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 15:03:04 kafka | zookeeper.ssl.keystore.location = null 15:03:04 kafka | zookeeper.ssl.keystore.password = null 15:03:04 kafka | zookeeper.ssl.keystore.type = null 15:03:04 kafka | zookeeper.ssl.ocsp.enable = false 15:03:04 kafka | zookeeper.ssl.protocol = TLSv1.2 15:03:04 kafka | zookeeper.ssl.truststore.location = null 15:03:04 kafka | zookeeper.ssl.truststore.password = null 15:03:04 kafka | zookeeper.ssl.truststore.type = null 15:03:04 kafka | (kafka.server.KafkaConfig) 15:03:04 kafka | [2025-06-13 14:56:57,130] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 15:03:04 kafka | [2025-06-13 14:56:57,132] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 15:03:04 kafka | [2025-06-13 14:56:57,130] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 15:03:04 kafka | [2025-06-13 14:56:57,135] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 15:03:04 kafka | [2025-06-13 14:56:57,172] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:56:57,176] INFO Attempting recovery for all logs in /var/lib/kafka/data since no clean shutdown file was found (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:56:57,208] INFO Loaded 0 logs in 35ms. (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:56:57,208] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:56:57,211] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:56:57,223] INFO Starting the log cleaner (kafka.log.LogCleaner) 15:03:04 kafka | [2025-06-13 14:56:57,268] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner) 15:03:04 kafka | [2025-06-13 14:56:57,283] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 15:03:04 kafka | [2025-06-13 14:56:57,298] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 15:03:04 kafka | [2025-06-13 14:56:57,338] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread) 15:03:04 kafka | [2025-06-13 14:56:57,672] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 15:03:04 kafka | [2025-06-13 14:56:57,676] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor) 15:03:04 kafka | [2025-06-13 14:56:57,698] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 15:03:04 kafka | [2025-06-13 14:56:57,699] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 15:03:04 kafka | [2025-06-13 14:56:57,699] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 15:03:04 kafka | [2025-06-13 14:56:57,703] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 15:03:04 kafka | [2025-06-13 14:56:57,708] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread) 15:03:04 kafka | [2025-06-13 14:56:57,728] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 15:03:04 kafka | [2025-06-13 14:56:57,730] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 15:03:04 kafka | [2025-06-13 14:56:57,732] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 15:03:04 kafka | [2025-06-13 14:56:57,734] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 15:03:04 kafka | [2025-06-13 14:56:57,747] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 15:03:04 kafka | [2025-06-13 14:56:57,771] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient) 15:03:04 kafka | [2025-06-13 14:56:57,805] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1749826617786,1749826617786,1,0,0,72057607125204993,258,0,27 15:03:04 kafka | (kafka.zk.KafkaZkClient) 15:03:04 kafka | [2025-06-13 14:56:57,807] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 15:03:04 kafka | [2025-06-13 14:56:57,877] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 15:03:04 kafka | [2025-06-13 14:56:57,887] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 15:03:04 kafka | [2025-06-13 14:56:57,898] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 15:03:04 kafka | [2025-06-13 14:56:57,898] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 15:03:04 kafka | [2025-06-13 14:56:57,903] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 15:03:04 kafka | [2025-06-13 14:56:57,918] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:56:57,918] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:57,923] INFO [GroupCoordinator 1]: Startup complete. 
(kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:56:57,924] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:57,930] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 15:03:04 kafka | [2025-06-13 14:56:57,939] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 15:03:04 kafka | [2025-06-13 14:56:57,945] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 15:03:04 kafka | [2025-06-13 14:56:57,946] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 15:03:04 kafka | [2025-06-13 14:56:57,965] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:57,965] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache) 15:03:04 kafka | [2025-06-13 14:56:57,977] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:57,981] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:57,982] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 15:03:04 kafka | [2025-06-13 14:56:57,990] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:58,008] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 15:03:04 kafka | [2025-06-13 14:56:58,013] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:58,021] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. 
(kafka.network.SocketServer) 15:03:04 kafka | [2025-06-13 14:56:58,022] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:58,028] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 15:03:04 kafka | [2025-06-13 14:56:58,033] INFO Kafka version: 7.4.9-ccs (org.apache.kafka.common.utils.AppInfoParser) 15:03:04 kafka | [2025-06-13 14:56:58,033] INFO Kafka commitId: 07d888cfc0d14765fe5557324f1fdb4ada6698a5 (org.apache.kafka.common.utils.AppInfoParser) 15:03:04 kafka | [2025-06-13 14:56:58,033] INFO Kafka startTimeMs: 1749826618027 (org.apache.kafka.common.utils.AppInfoParser) 15:03:04 kafka | [2025-06-13 14:56:58,034] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 15:03:04 kafka | [2025-06-13 14:56:58,039] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 15:03:04 kafka | [2025-06-13 14:56:58,040] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:58,040] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:58,040] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:58,040] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:58,043] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:58,043] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:58,044] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:58,044] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 15:03:04 kafka | [2025-06-13 14:56:58,045] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:58,048] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 15:03:04 kafka | [2025-06-13 14:56:58,053] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 15:03:04 kafka | [2025-06-13 14:56:58,054] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 15:03:04 kafka | [2025-06-13 14:56:58,057] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 15:03:04 kafka | [2025-06-13 14:56:58,057] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 15:03:04 kafka | [2025-06-13 14:56:58,058] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 15:03:04 kafka | [2025-06-13 14:56:58,059] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 15:03:04 kafka | [2025-06-13 14:56:58,061] DEBUG 
[PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 15:03:04 kafka | [2025-06-13 14:56:58,062] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:58,075] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:58,076] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:58,077] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:58,077] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:58,078] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 15:03:04 kafka | [2025-06-13 14:56:58,080] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:58,151] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:56:58,153] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 15:03:04 kafka | [2025-06-13 14:56:58,172] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 15:03:04 kafka | [2025-06-13 14:56:58,212] INFO [BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 15:03:04 kafka | [2025-06-13 14:57:03,157] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:57:03,158] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:57:48,673] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:57:48,684] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> 
ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 15:03:04 kafka | [2025-06-13 14:57:48,699] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:57:48,701] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 15:03:04 kafka | [2025-06-13 14:57:48,824] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(g0oOWLlyQZGuYM2AJknoXg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(udPdfu8ZRYWmOXlVLJGqXg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:57:48,826] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:57:48,828] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:48,829] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:48,829] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:48,829] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:48,829] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:48,829] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:48,829] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:48,829] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:48,829] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:48,829] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:48,829] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:48,829] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 
state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,830] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,831] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
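
The INFO entries above show the controller walking every partition of __consumer_offsets and policy-pdp-pap through the first hop of Kafka's partition state machine. A minimal sketch of that state machine, assuming the transition rules documented for the Kafka controller (illustrative Python, not Kafka's Scala implementation):

    from enum import Enum

    class PartitionState(Enum):
        NON_EXISTENT = "NonExistentPartition"
        NEW = "NewPartition"            # replicas assigned, no leader yet
        ONLINE = "OnlinePartition"      # leader elected, ISR known
        OFFLINE = "OfflinePartition"    # leader lost

    # Legal transitions per the documented controller state machine.
    VALID_TRANSITIONS = {
        PartitionState.NON_EXISTENT: {PartitionState.NEW},
        PartitionState.NEW: {PartitionState.ONLINE, PartitionState.OFFLINE},
        PartitionState.ONLINE: {PartitionState.ONLINE, PartitionState.OFFLINE},
        PartitionState.OFFLINE: {PartitionState.ONLINE, PartitionState.OFFLINE,
                                 PartitionState.NON_EXISTENT},
    }

    def transition(partition, current, target):
        """Validate and log one transition, mirroring state.change.logger."""
        if target not in VALID_TRANSITIONS[current]:
            raise ValueError(f"illegal transition {current.value} -> {target.value}")
        print(f"Changed partition {partition} state from {current.value} to {target.value}")
        return target

    state = transition("__consumer_offsets-46",
                       PartitionState.NON_EXISTENT, PartitionState.NEW)
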
15:03:04 kafka | [2025-06-13 14:57:48,837] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,837] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,837] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,837] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,837] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,838] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,838] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,838] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,838] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,838] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,838] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,838] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,838] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,838] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,838] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,838] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,839] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:48,842] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
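
When triaging a CSIT log like this one, the state.change.logger lines are regular enough to parse mechanically. A hypothetical helper, with the pattern derived only from the controller entries visible here:

    import re

    # Matches both "Changed partition X state from A to B" and
    # "Changed state of replica N for partition X from A to B".
    PATTERN = re.compile(
        r"\[(?P<ts>[0-9: ,\-]+)\] (?P<level>INFO|TRACE) "
        r"\[Controller id=(?P<controller>\d+) epoch=(?P<epoch>\d+)\] "
        r"Changed (?:partition (?P<partition>\S+) state"
        r"|state of replica \d+ for partition (?P<rpartition>\S+)) "
        r"from (?P<src>\w+) to (?P<dst>\w+)"
    )

    line = ("[2025-06-13 14:57:48,830] INFO [Controller id=1 epoch=1] "
            "Changed partition __consumer_offsets-46 state from "
            "NonExistentPartition to NewPartition with assigned replicas 1 "
            "(state.change.logger)")

    m = PATTERN.search(line)
    if m:
        print(m.group("partition") or m.group("rpartition"),
              m.group("src"), "->", m.group("dst"))
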
15:03:04 kafka | [2025-06-13 14:57:49,210] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,210] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,210] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,210] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,211] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,211] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,211] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,211] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,211] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,211] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,211] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,211] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,211] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,212] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,212] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,212] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,212] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,212] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,212] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,212] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,212] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,212] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,213] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,213] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,213] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,213] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,213] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,213] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,213] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,213] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,214] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,214] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,214] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,214] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,214] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,214] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,214] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,214] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,214] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,215] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,215] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,215] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,215] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,215] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,215] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,215] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,215] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,215] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,216] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,216] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,216] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
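
The become-leader entries that follow all serialize the same LeaderAndIsrPartitionState shape. As a reading aid, a Python rendering of the fields visible in these entries (names copied from the log; the class itself is illustrative, not a Kafka API):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class LeaderAndIsrPartitionState:
        # Field names mirror the log entries below.
        topicName: str
        partitionIndex: int
        controllerEpoch: int
        leader: int
        leaderEpoch: int
        isr: List[int]
        partitionEpoch: int
        replicas: List[int]
        addingReplicas: List[int] = field(default_factory=list)
        removingReplicas: List[int] = field(default_factory=list)
        isNew: bool = True
        leaderRecoveryState: int = 0  # 0 == RECOVERED in the entries above

    # Single-broker CSIT: broker 1 is the leader, the only replica, and the
    # only ISR member, which matches every entry the controller sends.
    state = LeaderAndIsrPartitionState('policy-pdp-pap', 0, 1, 1, 0, [1], 0, [1])
    print(state)
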
15:03:04 kafka | [2025-06-13 14:57:49,218] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,218] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,218] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,218] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,219] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,219] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,219] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,219] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,219] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,219] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,219] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,219] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,219] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,220] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,220] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,220] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,220] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,220] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,220] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,220] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,220] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,220] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,221] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,221] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,221] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,221] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,221] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,221] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,221] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,221] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,221] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,221] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,222] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,222] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,222] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,222] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,222] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,222] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,222] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,222] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,222] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,223] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,223] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,223] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,223] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,223] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,223] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,223] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,223] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,223] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,224] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,224] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,227] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger)
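
The 51 become-leader partitions are the 50 __consumer_offsets partitions (the broker default offsets.topic.num.partitions=50) plus the single policy-pdp-pap-0 partition. A rough sketch of the trigger against a single-broker Kafka like this one, assuming the kafka-python package (not part of this job's pip list) and a placeholder broker address:

    from kafka.admin import KafkaAdminClient, NewTopic

    # Broker address is an assumption, not taken from this log.
    admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
    # One partition, replication factor 1: matches leader=1, isr=[1],
    # replicas=[1] in every entry above.
    admin.create_topics([NewTopic(name="policy-pdp-pap",
                                  num_partitions=1,
                                  replication_factor=1)])
    admin.close()
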
15:03:04 kafka | [2025-06-13 14:57:49,229] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,229] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,229] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,229] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,229] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,229] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,229] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,229] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,230] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,230] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,230] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,230] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,230] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,230] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,230] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,230] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,230] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,230] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,231] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,231] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,231] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,231] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,231] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,231] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,231] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,231] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,231] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,231] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,231] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,232] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,232] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,232] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,232] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,232] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,232] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,232] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,232] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,232] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,232] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,232] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,233] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,233] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,233] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,233] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,233] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,233] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,233] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,233] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,233] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,233] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,233] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,234] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,236] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger)
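
From here the broker echoes each of the 51 partition states back as it applies them. A hypothetical cross-check for a saved console log (the file path is a placeholder): the count of the broker's "Received LeaderAndIsr request" TRACE lines should match the 51 become-leader partitions the controller announced.

    import sys

    def count_received(path):
        """Count the broker's per-partition LeaderAndIsr TRACE entries."""
        with open(path, encoding="utf-8") as fh:
            return sum("Received LeaderAndIsr request" in line for line in fh)

    if __name__ == "__main__":
        print(count_received(sys.argv[1]), "partition states received by broker 1")
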
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:49,238] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:49,238] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:49,238] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:49,238] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:49,238] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:49,239] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:49,239] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:49,239] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:49,239] TRACE [Broker id=1] Received LeaderAndIsr 
15:03:04 kafka | [2025-06-13 14:57:49,239] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,239] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,239] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,239] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,239] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,240] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,240] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,240] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,240] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,240] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,240] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,240] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,240] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,240] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,241] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,241] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,241] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,241] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,241] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,241] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,241] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,241] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,241] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,241] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,242] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,242] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,242] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,242] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,242] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,242] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,242] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,242] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,242] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,242] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,243] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,243] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,243] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,280] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,280] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,280] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,280] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,280] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,280] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,280] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,280] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,280] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,281] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,281] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,281] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,281] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,281] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,281] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,281] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,281] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,281] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,281] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,282] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,282] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,282] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,282] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,282] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,282] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,282] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,282] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,282] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,283] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,283] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,283] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,283] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,283] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,283] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,283] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,283] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,283] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,284] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,284] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,284] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,284] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,284] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,284] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,284] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,284] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,284] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,284] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,285] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,285] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,285] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,285] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,286] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
15:03:04 kafka | [2025-06-13 14:57:49,287] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,337] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:49,349] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:49,351] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:49,351] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:49,353] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,431] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:49,432] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:49,432] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:49,432] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:49,433] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,494] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:49,495] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:49,495] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:49,495] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:49,496] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,607] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:49,608] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:49,609] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:49,609] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:49,609] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,689] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:49,691] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:49,691] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:49,691] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:49,692] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,940] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:49,942] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:49,942] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:49,942] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:49,942] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:49,996] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:49,997] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:49,997] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:49,997] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:49,998] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:50,115] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:50,117] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:50,117] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:50,117] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:50,117] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:50,254] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:50,256] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:50,256] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:50,256] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:50,256] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:50,326] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:50,328] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:50,328] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:50,328] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:50,328] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:50,444] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:50,445] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:50,445] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:50,445] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:50,445] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:50,609] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:50,611] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:50,611] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:50,611] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:50,611] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:50,738] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:50,740] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:50,740] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:50,740] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:50,740] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:50,950] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:50,952] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:50,952] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:50,952] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:50,952] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:51,062] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:51,064] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:51,064] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,064] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,064] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:51,126] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:51,127] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:51,127] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,128] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,128] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:51,155] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:51,156] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:51,156] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,156] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,157] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:51,220] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:51,221] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:51,222] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,222] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,222] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:51,271] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:51,273] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:51,273] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,273] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,274] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:51,333] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:51,334] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:51,335] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,335] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,335] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:51,417] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:51,422] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:51,422] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,422] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,422] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:51,470] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:51,472] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:51,472] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,472] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,472] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:51,508] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:51,510] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:51,510] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,510] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,510] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:51,545] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:51,546] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:51,546] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,546] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,546] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:51,591] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:51,592] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:51,592] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,592] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,593] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:51,636] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:51,637] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:51,638] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,638] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,638] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:51,702] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:51,703] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:51,703] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,703] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,704] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:51,757] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:51,759] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:51,759] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,759] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,759] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:51,843] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:51,845] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:51,845] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,845] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,845] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger)
15:03:04 kafka | [2025-06-13 14:57:51,897] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
15:03:04 kafka | [2025-06-13 14:57:51,898] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
15:03:04 kafka | [2025-06-13 14:57:51,898] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,898] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
15:03:04 kafka | [2025-06-13 14:57:51,899] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
(state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:51,979] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:51,981] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:51,981] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:51,981] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:51,982] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:52,062] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:52,064] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:52,064] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,064] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,065] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:52,121] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:52,122] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:52,123] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,123] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,123] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:52,239] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:52,240] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:52,241] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,241] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,241] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:52,314] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:52,315] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:52,316] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,316] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,316] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(g0oOWLlyQZGuYM2AJknoXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:52,352] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:52,354] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:52,354] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,354] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,354] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:52,422] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:52,423] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:52,424] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,424] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,424] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:52,489] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:52,490] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:52,490] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,490] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,491] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:52,539] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:52,541] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:52,541] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,541] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,541] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:52,601] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:52,602] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:52,602] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,602] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,603] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:52,680] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:52,681] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:52,682] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,682] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,682] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:52,729] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:52,730] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:52,730] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,730] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,730] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:52,756] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:52,757] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:52,757] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,757] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,758] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:52,825] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:52,827] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:52,827] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,827] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,827] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:52,889] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:52,890] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:52,890] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,891] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,891] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:52,974] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:52,975] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:52,975] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,976] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:52,976] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,054] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:53,055] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:53,055] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:53,055] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:53,056] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,126] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:53,128] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:53,128] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:53,128] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:53,128] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,173] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:53,175] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:53,175] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:53,175] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:53,176] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,255] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:53,256] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:53,256] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:53,256] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:53,256] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,311] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:57:53,312] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:57:53,312] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:53,312] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:57:53,313] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(udPdfu8ZRYWmOXlVLJGqXg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
(state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,341] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,341] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,341] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,341] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,341] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,341] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,341] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,341] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,341] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,341] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,342] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,342] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,342] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,342] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,342] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,342] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 
epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,342] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,342] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,342] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,342] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,343] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,343] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,343] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,343] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,343] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,343] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,343] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,343] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,343] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,343] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,343] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,343] TRACE 
[Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,343] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,344] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,344] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,344] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,344] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,344] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,344] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,344] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,344] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,344] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,344] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,344] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,344] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,344] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,344] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 
(state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,344] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,345] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,345] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,345] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,352] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,354] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,356] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,356] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,356] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,356] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,357] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,357] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,357] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,357] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,358] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,358] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,358] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,358] INFO [GroupMetadataManager brokerId=1] Scheduling loading of 
offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,358] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,358] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,359] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,359] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,359] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,359] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,359] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,360] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,360] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,360] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,360] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,360] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,361] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,361] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,361] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,361] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,361] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,361] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,362] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,362] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,362] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,362] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,362] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,363] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,363] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,363] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,363] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,363] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,363] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 8 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,364] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,364] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,364] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,364] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,364] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,364] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,365] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,365] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,365] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,365] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,365] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,365] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,365] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,365] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,365] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,365] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,365] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,365] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,366] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,366] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,366] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,366] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,366] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,366] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,366] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,366] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,366] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,366] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,366] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,366] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,366] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,366] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,367] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,367] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,367] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,367] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,367] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,367] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,367] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,367] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,367] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,367] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,367] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,367] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,367] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,368] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,368] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,368] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,368] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,368] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,368] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,368] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,368] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,368] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,368] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,368] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,369] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,369] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 6 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,369] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,369] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,369] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,369] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,369] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,369] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,369] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,369] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,369] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,369] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,369] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,370] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,370] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,370] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,370] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,370] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,370] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,370] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,370] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,370] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,370] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,370] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,371] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,371] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,371] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,371] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,371] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,371] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,371] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,371] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,371] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,372] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,372] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,372] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,372] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,372] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,372] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,373] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 5 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,373] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,373] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,373] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,373] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,374] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 4 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 
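[Editor's note: the __consumer_offsets-NN partitions being loaded above are the 50 partitions (the offsets.topic.num.partitions default) of Kafka's internal offsets topic. A consumer group is pinned to one of them by abs(group.id.hashCode) % 50, and whichever broker leads that partition acts as the group's coordinator; on this single-broker cluster that is always broker 1, hence "Elected as the group coordinator" for every partition. A minimal sketch of the mapping in Python, assuming Java String.hashCode semantics; the outputs match the rebalance entries later in this log:]

    def java_string_hashcode(s):
        # Java's String.hashCode(): h = 31*h + ch, wrapped to a signed 32-bit int.
        h = 0
        for ch in s:
            h = (31 * h + ord(ch)) & 0xFFFFFFFF
        return h - 0x100000000 if h >= 0x80000000 else h

    def offsets_partition_for(group_id, num_partitions=50):
        # abs(hash) % partition count (Kafka also guards the Integer.MIN_VALUE case).
        return abs(java_string_hashcode(group_id)) % num_partitions

    print(offsets_partition_for("policy-pap"))  # 24 -> __consumer_offsets-24 below
    print(offsets_partition_for("opa-pdp"))     # 25 -> __consumer_offsets-25 below
    print(offsets_partition_for("testgrp"))     # 3  -> __consumer_offsets-3 below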
(kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,374] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,374] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,374] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,374] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,375] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 4 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,375] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 15:03:04 kafka | [2025-06-13 14:57:53,375] INFO [Broker id=1] Finished LeaderAndIsr request in 4139ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,379] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=udPdfu8ZRYWmOXlVLJGqXg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=g0oOWLlyQZGuYM2AJknoXg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,385] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,385] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,385] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,386] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,386] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,386] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,386] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,386] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,386] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,386] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,387] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,387] TRACE [Broker id=1] 
Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,387] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,387] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,387] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,387] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,387] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,388] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,388] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,388] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,388] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,388] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,388] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,388] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,388] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,389] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,389] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,389] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,389] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,389] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,389] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,389] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,390] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,390] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,390] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,390] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,390] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,390] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,390] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,391] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,391] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,391] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,391] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,391] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,391] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,391] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,392] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,392] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,392] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,392] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,392] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,393] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,394] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 15:03:04 kafka | [2025-06-13 14:57:53,464] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group fddb7771-28a4-4343-b8d7-b4045b0e6dfb in Empty state. Created a new member id consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3-fd418bd8-fb03-4512-a790-425496c85d57 and request the member to rejoin with this id. 
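[Editor's note: the run of "Cached leader info ... correlation id 2" TRACE entries above, closed out by "Add 51 partitions and deleted 0 partitions from metadata cache", is the broker filling the metadata cache that answers client Metadata requests: 50 __consumer_offsets partitions plus policy-pdp-pap-0, all with leader=1. A sketch of reading that view back from a client, assuming the confluent-kafka Python package and the kafka:9092 listener named in the log:]

    from confluent_kafka.admin import AdminClient

    admin = AdminClient({"bootstrap.servers": "kafka:9092"})
    md = admin.list_topics(timeout=10)  # Metadata request, answered from the broker's cache
    for name in ("policy-pdp-pap", "__consumer_offsets"):
        t = md.topics[name]
        print(name, len(t.partitions), "partitions; p0 leader:", t.partitions[0].leader)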
(kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:53,481] INFO [GroupCoordinator 1]: Preparing to rebalance group fddb7771-28a4-4343-b8d7-b4045b0e6dfb in state PreparingRebalance with old generation 0 (__consumer_offsets-44) (reason: Adding new member consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3-fd418bd8-fb03-4512-a790-425496c85d57 with group instance id None; client reason: need to re-join with the given member-id: consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3-fd418bd8-fb03-4512-a790-425496c85d57) (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:54,358] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-955fec44-6790-46e8-a57f-50c11dd9b3c2 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:54,362] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-955fec44-6790-46e8-a57f-50c11dd9b3c2 with group instance id None; client reason: need to re-join with the given member-id: consumer-policy-pap-4-955fec44-6790-46e8-a57f-50c11dd9b3c2) (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:56,493] INFO [GroupCoordinator 1]: Stabilized group fddb7771-28a4-4343-b8d7-b4045b0e6dfb generation 1 (__consumer_offsets-44) with 1 members (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:56,519] INFO [GroupCoordinator 1]: Assignment received from leader consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3-fd418bd8-fb03-4512-a790-425496c85d57 for group fddb7771-28a4-4343-b8d7-b4045b0e6dfb for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:57,363] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:57:57,369] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-955fec44-6790-46e8-a57f-50c11dd9b3c2 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:58:33,763] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group opa-pdp in Empty state. Created a new member id rdkafka-0dac1918-2565-49c3-b9bf-184f1a4f9602 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:58:33,765] INFO [GroupCoordinator 1]: Preparing to rebalance group opa-pdp in state PreparingRebalance with old generation 0 (__consumer_offsets-25) (reason: Adding new member rdkafka-0dac1918-2565-49c3-b9bf-184f1a4f9602 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:58:36,767] INFO [GroupCoordinator 1]: Stabilized group opa-pdp generation 1 (__consumer_offsets-25) with 1 members (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:58:36,771] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-0dac1918-2565-49c3-b9bf-184f1a4f9602 for group opa-pdp for generation 1. The group has 1 members, 0 of which are static. 
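[Editor's note: the entries above show the group-join handshake three times (groups fddb7771-..., policy-pap, opa-pdp): a member with an unknown id joins, the coordinator mints a member id and asks it to rejoin with it, the group passes through PreparingRebalance, stabilizes at generation 1, and the leader member sends the partition assignment back. The rdkafka-* member id marks a librdkafka-based client. A sketch of a client that would drive this broker-side sequence, using the confluent-kafka binding over librdkafka (the topic name is an assumption):]

    from confluent_kafka import Consumer

    c = Consumer({
        "bootstrap.servers": "kafka:9092",
        "group.id": "opa-pdp",           # group name as it appears in the log
        "auto.offset.reset": "earliest",
    })
    c.subscribe(["policy-pdp-pap"])       # subscribing triggers the JoinGroup round-trips
    msg = c.poll(10.0)                    # polling drives join/rejoin/sync; None if nothing arrives
    print(c.assignment())                 # populated once the coordinator logs "Stabilized group"
    c.close()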
(kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 14:59:44,530] INFO Creating topic policy-notification with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 15:03:04 kafka | [2025-06-13 14:59:44,549] INFO [Controller id=1] New topics: [Set(policy-notification)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-notification,Some(eD99wZpyRpy5GlLcA84t2Q),Map(policy-notification-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:59:44,549] INFO [Controller id=1] New partition creation callback for policy-notification-0 (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 14:59:44,549] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:59:44,549] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 15:03:04 kafka | [2025-06-13 14:59:44,550] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NonExistentReplica to NewReplica (state.change.logger) 15:03:04 kafka | [2025-06-13 14:59:44,550] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 15:03:04 kafka | [2025-06-13 14:59:44,558] INFO [Controller id=1 epoch=1] Changed partition policy-notification-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 15:03:04 kafka | [2025-06-13 14:59:44,558] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-notification-0 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:59:44,558] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 15:03:04 kafka | [2025-06-13 14:59:44,559] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 15:03:04 kafka | [2025-06-13 14:59:44,559] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-notification-0 from NewReplica to OnlineReplica (state.change.logger) 15:03:04 kafka | [2025-06-13 14:59:44,559] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 15:03:04 kafka | [2025-06-13 14:59:44,570] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 1 partitions (state.change.logger) 15:03:04 kafka | [2025-06-13 14:59:44,570] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:59:44,570] TRACE [Broker 
id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition policy-notification-0 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:59:44,571] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-notification-0) (kafka.server.ReplicaFetcherManager) 15:03:04 kafka | [2025-06-13 14:59:44,572] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) 15:03:04 kafka | [2025-06-13 14:59:44,576] INFO [LogLoader partition=policy-notification-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 15:03:04 kafka | [2025-06-13 14:59:44,577] INFO Created log for partition policy-notification-0 in /var/lib/kafka/data/policy-notification-0 with properties {} (kafka.log.LogManager) 15:03:04 kafka | [2025-06-13 14:59:44,578] INFO [Partition policy-notification-0 broker=1] No checkpointed highwatermark is found for partition policy-notification-0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:59:44,579] INFO [Partition policy-notification-0 broker=1] Log loaded for partition policy-notification-0 with initial high watermark 0 (kafka.cluster.Partition) 15:03:04 kafka | [2025-06-13 14:59:44,579] INFO [Broker id=1] Leader policy-notification-0 with topic id Some(eD99wZpyRpy5GlLcA84t2Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. (state.change.logger) 15:03:04 kafka | [2025-06-13 14:59:44,583] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition policy-notification-0 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:59:44,584] INFO [Broker id=1] Finished LeaderAndIsr request in 14ms correlationId 3 from controller 1 for 1 partitions (state.change.logger) 15:03:04 kafka | [2025-06-13 14:59:44,585] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=eD99wZpyRpy5GlLcA84t2Q, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 15:03:04 kafka | [2025-06-13 14:59:44,590] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-notification', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-notification-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:59:44,590] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger) 15:03:04 kafka | [2025-06-13 14:59:44,591] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 15:03:04 kafka | [2025-06-13 15:01:07,471] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. 
Created a new member id rdkafka-c01ef646-b601-4a05-85b2-0938b98c522c and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:01:07,472] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 0 (__consumer_offsets-3) (reason: Adding new member rdkafka-c01ef646-b601-4a05-85b2-0938b98c522c with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:01:10,474] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 1 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:01:10,477] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-c01ef646-b601-4a05-85b2-0938b98c522c for group testgrp for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:01:10,595] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 1 (__consumer_offsets-3) (reason: Removing member rdkafka-c01ef646-b601-4a05-85b2-0938b98c522c on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:01:10,597] INFO [GroupCoordinator 1]: Group testgrp with generation 2 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:01:10,599] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-c01ef646-b601-4a05-85b2-0938b98c522c, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.6, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:01:32,326] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-eb2fa9f1-0423-4199-b4c9-06e6ec57f156 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:01:32,328] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 2 (__consumer_offsets-3) (reason: Adding new member rdkafka-eb2fa9f1-0423-4199-b4c9-06e6ec57f156 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:01:35,329] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 3 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:01:35,332] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-eb2fa9f1-0423-4199-b4c9-06e6ec57f156 for group testgrp for generation 3. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:01:35,340] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 3 (__consumer_offsets-3) (reason: Removing member rdkafka-eb2fa9f1-0423-4199-b4c9-06e6ec57f156 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:01:35,340] INFO [GroupCoordinator 1]: Group testgrp with generation 4 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:01:35,341] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-eb2fa9f1-0423-4199-b4c9-06e6ec57f156, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.6, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:01:57,879] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group testgrp in Empty state. Created a new member id rdkafka-51bd9d0e-f4d7-4c54-b2f8-5ac2e581ea83 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:01:57,880] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 4 (__consumer_offsets-3) (reason: Adding new member rdkafka-51bd9d0e-f4d7-4c54-b2f8-5ac2e581ea83 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:02:00,882] INFO [GroupCoordinator 1]: Stabilized group testgrp generation 5 (__consumer_offsets-3) with 1 members (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:02:00,885] INFO [GroupCoordinator 1]: Assignment received from leader rdkafka-51bd9d0e-f4d7-4c54-b2f8-5ac2e581ea83 for group testgrp for generation 5. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:02:00,891] INFO [GroupCoordinator 1]: Preparing to rebalance group testgrp in state PreparingRebalance with old generation 5 (__consumer_offsets-3) (reason: Removing member rdkafka-51bd9d0e-f4d7-4c54-b2f8-5ac2e581ea83 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:02:00,891] INFO [GroupCoordinator 1]: Group testgrp with generation 6 is now empty (__consumer_offsets-3) (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:02:00,892] INFO [GroupCoordinator 1]: Member MemberMetadata(memberId=rdkafka-51bd9d0e-f4d7-4c54-b2f8-5ac2e581ea83, groupInstanceId=None, clientId=rdkafka, clientHost=/172.17.0.6, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group testgrp through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) 15:03:04 kafka | [2025-06-13 15:02:03,161] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 15:02:03,162] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 15:02:03,167] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 HashMap() (kafka.controller.KafkaController) 15:03:04 kafka | [2025-06-13 15:02:03,168] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController) 15:03:04 policy-api | Waiting for policy-db-migrator port 6824... 15:03:04 policy-api | policy-db-migrator (172.17.0.6:6824) open 15:03:04 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 15:03:04 policy-api | 15:03:04 policy-api | . ____ _ __ _ _ 15:03:04 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 15:03:04 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 15:03:04 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 15:03:04 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 15:03:04 policy-api | =========|_|==============|___/=/_/_/_/ 15:03:04 policy-api | 15:03:04 policy-api | :: Spring Boot :: (v3.4.6) 15:03:04 policy-api | 15:03:04 policy-api | [2025-06-13T14:57:24.384+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.2.Final 15:03:04 policy-api | [2025-06-13T14:57:24.461+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.15 with PID 73 (/app/api.jar started by policy in /opt/app/policy/api/bin) 15:03:04 policy-api | [2025-06-13T14:57:24.462+00:00|INFO|PolicyApiApplication|main] The following 1 profile is active: "default" 15:03:04 policy-api | [2025-06-13T14:57:26.396+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 15:03:04 policy-api | [2025-06-13T14:57:26.628+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 218 ms. Found 6 JPA repository interfaces. 
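A note on the kafka | GroupCoordinator entries above: the same lifecycle repeats three times, once per CSIT probe. An rdkafka client joins group testgrp with a freshly assigned member id, the coordinator stabilizes the group with one member, and the client then leaves via an explicit LeaveGroup, leaving the group empty at the next generation. A minimal client-side sketch of one such cycle, assuming the broker at kafka:9092 and the confluent-kafka Python binding (which wraps librdkafka, hence the rdkafka-* member ids); the topic name is a placeholder:

# One join/poll/leave cycle against group "testgrp".
# confluent-kafka wraps librdkafka, which is why the coordinator
# logs member ids of the form rdkafka-<uuid>.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",
    "group.id": "testgrp",
    "session.timeout.ms": 45000,   # matches sessionTimeoutMs in the log
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["policy-notification"])  # placeholder topic

# The first poll triggers the JoinGroup round logged as
# "Preparing to rebalance group testgrp ... Adding new member".
consumer.poll(5.0)

# close() sends the explicit LeaveGroup that empties the group again.
consumer.close()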
15:03:04 policy-api | [2025-06-13T14:57:27.414+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) 15:03:04 policy-api | [2025-06-13T14:57:27.429+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 15:03:04 policy-api | [2025-06-13T14:57:27.430+00:00|INFO|StandardService|main] Starting service [Tomcat] 15:03:04 policy-api | [2025-06-13T14:57:27.431+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] 15:03:04 policy-api | [2025-06-13T14:57:27.472+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 15:03:04 policy-api | [2025-06-13T14:57:27.473+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2936 ms 15:03:04 policy-api | [2025-06-13T14:57:27.837+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 15:03:04 policy-api | [2025-06-13T14:57:27.929+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final 15:03:04 policy-api | [2025-06-13T14:57:27.985+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 15:03:04 policy-api | [2025-06-13T14:57:28.426+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 15:03:04 policy-api | [2025-06-13T14:57:28.468+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 15:03:04 policy-api | [2025-06-13T14:57:28.722+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@612bb755 15:03:04 policy-api | [2025-06-13T14:57:28.725+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 15:03:04 policy-api | [2025-06-13T14:57:28.835+00:00|INFO|pooling|main] HHH10001005: Database info: 15:03:04 policy-api | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] 15:03:04 policy-api | Database driver: undefined/unknown 15:03:04 policy-api | Database version: 16.4 15:03:04 policy-api | Autocommit mode: undefined/unknown 15:03:04 policy-api | Isolation level: undefined/unknown 15:03:04 policy-api | Minimum pool size: undefined/unknown 15:03:04 policy-api | Maximum pool size: undefined/unknown 15:03:04 policy-api | [2025-06-13T14:57:31.112+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 15:03:04 policy-api | [2025-06-13T14:57:31.116+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 15:03:04 policy-api | [2025-06-13T14:57:31.904+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 15:03:04 policy-api | [2025-06-13T14:57:32.904+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 15:03:04 policy-api | [2025-06-13T14:57:34.145+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 15:03:04 policy-api | [2025-06-13T14:57:34.215+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager 15:03:04 policy-api | [2025-06-13T14:57:35.056+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' 15:03:04 policy-api | [2025-06-13T14:57:35.235+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 15:03:04 policy-api | [2025-06-13T14:57:35.267+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/api/v1' 15:03:04 policy-api | [2025-06-13T14:57:35.298+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.917 seconds (process running for 12.558) 15:03:04 policy-api | [2025-06-13T14:57:39.920+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 15:03:04 policy-api | [2025-06-13T14:57:39.921+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 15:03:04 policy-api | [2025-06-13T14:57:39.922+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms 15:03:04 policy-api | [2025-06-13T15:00:45.312+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-6] ***** OrderedServiceImpl implementers: 15:03:04 policy-api | [] 15:03:04 policy-api | [2025-06-13T15:02:01.222+00:00|WARN|CommonRestController|http-nio-6969-exec-1] "incoming fragment" INVALID, item has status INVALID 15:03:04 policy-api | item "entity" value "abac:1.0.7" INVALID, does not equal existing entity 15:03:04 policy-api | 15:03:04 policy-csit | Invoking the robot tests from: opa-pdp-test.robot opa-pdp-slas.robot 15:03:04 policy-csit | Run Robot test 15:03:04 policy-csit | ROBOT_VARIABLES=-v DATA:/opt/robotworkspace/models/models-examples/src/main/resources/policies 15:03:04 policy-csit | -v NODETEMPLATES:/opt/robotworkspace/models/models-examples/src/main/resources/nodetemplates 15:03:04 policy-csit | -v POLICY_API_IP:policy-api:6969 15:03:04 policy-csit | -v POLICY_RUNTIME_ACM_IP:policy-clamp-runtime-acm:6969 15:03:04 policy-csit | -v POLICY_PARTICIPANT_SIM_IP:policy-clamp-ac-sim-ppnt:6969 15:03:04 policy-csit | -v POLICY_PAP_IP:policy-pap:6969 15:03:04 policy-csit | -v APEX_IP:policy-apex-pdp:6969 15:03:04 policy-csit | -v APEX_EVENTS_IP:policy-apex-pdp:23324 15:03:04 policy-csit | -v KAFKA_IP:kafka:9092 15:03:04 policy-csit | -v PROMETHEUS_IP:prometheus:9090 15:03:04 policy-csit | -v POLICY_PDPX_IP:policy-xacml-pdp:6969 15:03:04 policy-csit | -v POLICY_OPA_IP:policy-opa-pdp:8282 15:03:04 policy-csit | -v POLICY_DROOLS_IP:policy-drools-pdp:9696 15:03:04 policy-csit | -v DROOLS_IP:policy-drools-apps:6969 15:03:04 policy-csit | -v DROOLS_IP_2:policy-drools-apps:9696 15:03:04 policy-csit | -v TEMP_FOLDER:/tmp/distribution 15:03:04 policy-csit | -v DISTRIBUTION_IP:policy-distribution:6969 15:03:04 policy-csit | -v TEST_ENV:docker 15:03:04 policy-csit | -v JAEGER_IP:jaeger:16686 15:03:04 policy-csit | Starting Robot test suites ... 
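The ROBOT_VARIABLES listed above are handed to Robot Framework as repeated -v name:value options. A rough Python equivalent of the invocation, assuming the two suite files are on the current path (the real job drives the robot CLI from a shell script); only a subset of the variables is shown:

# Sketch: run the two OPA PDP suites with a few of the variables above.
from robot import run

rc = run(
    "opa-pdp-test.robot",
    "opa-pdp-slas.robot",
    variable=[                 # equivalent to repeated -v options
        "POLICY_API_IP:policy-api:6969",
        "POLICY_OPA_IP:policy-opa-pdp:8282",
        "KAFKA_IP:kafka:9092",
        "PROMETHEUS_IP:prometheus:9090",
        "TEST_ENV:docker",
    ],
    outputdir="/tmp/results",  # matches the Output/Log/Report paths below
)
print("RESULT:", rc)           # 0 when every test passes, as in this run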
15:03:04 policy-csit | ============================================================================== 15:03:04 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas 15:03:04 policy-csit | ============================================================================== 15:03:04 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test 15:03:04 policy-csit | ============================================================================== 15:03:04 policy-csit | Healthcheck :: Verify OPA PDP health check | PASS | 15:03:04 policy-csit | ------------------------------------------------------------------------------ 15:03:04 policy-csit | ValidateDataBeforePolicyDeployment | PASS | 15:03:04 policy-csit | ------------------------------------------------------------------------------ 15:03:04 policy-csit | ValidatesZonePolicy | PASS | 15:03:04 policy-csit | ------------------------------------------------------------------------------ 15:03:04 policy-csit | ValidatesVehiclePolicy | PASS | 15:03:04 policy-csit | ------------------------------------------------------------------------------ 15:03:04 policy-csit | ValidatesAbacPolicy | PASS | 15:03:04 policy-csit | ------------------------------------------------------------------------------ 15:03:04 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Test | PASS | 15:03:04 policy-csit | 5 tests, 5 passed, 0 failed 15:03:04 policy-csit | ============================================================================== 15:03:04 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas 15:03:04 policy-csit | ============================================================================== 15:03:04 policy-csit | WaitForPrometheusServer :: Sleep time to wait for Prometheus serve... | PASS | 15:03:04 policy-csit | ------------------------------------------------------------------------------ 15:03:04 policy-csit | ValidateOPAPolicyDecisionsTotalCounter :: Validate opa policy deci... | PASS | 15:03:04 policy-csit | ------------------------------------------------------------------------------ 15:03:04 policy-csit | ValidateOPAPolicyDataTotalCounter :: Validate opa policy data coun... | PASS | 15:03:04 policy-csit | ------------------------------------------------------------------------------ 15:03:04 policy-csit | ValidateOPADecisionAverageResponseTime :: Ensure average response ... | PASS | 15:03:04 policy-csit | ------------------------------------------------------------------------------ 15:03:04 policy-csit | ValidateOPADataAverageResponseTime :: Ensure average response time... | PASS | 15:03:04 policy-csit | ------------------------------------------------------------------------------ 15:03:04 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas.Opa-Pdp-Slas | PASS | 15:03:04 policy-csit | 5 tests, 5 passed, 0 failed 15:03:04 policy-csit | ============================================================================== 15:03:04 policy-csit | Opa-Pdp-Test & Opa-Pdp-Slas | PASS | 15:03:04 policy-csit | 10 tests, 10 passed, 0 failed 15:03:04 policy-csit | ============================================================================== 15:03:04 policy-csit | Output: /tmp/results/output.xml 15:03:04 policy-csit | Log: /tmp/results/log.html 15:03:04 policy-csit | Report: /tmp/results/report.html 15:03:04 policy-csit | RESULT: 0 15:03:05 policy-db-migrator | Waiting for postgres port 5432... 
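The nc probes that follow implement a plain wait-for-port loop: keep retrying the TCP connect until postgres answers. A rough Python equivalent of the pattern (the actual container loops over netcat in shell):

# Block until a TCP port accepts connections, printing one line per
# attempt in the same spirit as the nc output below.
import socket
import time

def wait_for_port(host: str, port: int, retry_secs: float = 2.0) -> None:
    while True:
        try:
            with socket.create_connection((host, port), timeout=2.0):
                print(f"Connection to {host} {port} port succeeded!")
                return
        except OSError:
            print(f"connect to {host} port {port} (tcp) failed: Connection refused")
            time.sleep(retry_secs)

wait_for_port("postgres", 5432)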
15:03:05 policy-db-migrator | nc: connect to postgres (172.17.0.2) port 5432 (tcp) failed: Connection refused 15:03:05 policy-db-migrator | (previous message repeated 9 times while waiting) 15:03:05 policy-db-migrator | Connection to postgres (172.17.0.2) 5432 port [tcp/postgresql] succeeded! 15:03:05 policy-db-migrator | Initializing policyadmin... 15:03:05 policy-db-migrator | 321 blocks 15:03:05 policy-db-migrator | Preparing upgrade release version: 0800 15:03:05 policy-db-migrator | Preparing upgrade release version: 0900 15:03:05 policy-db-migrator | Preparing upgrade release version: 1000 15:03:05 policy-db-migrator | Preparing upgrade release version: 1100 15:03:05 policy-db-migrator | Preparing upgrade release version: 1200 15:03:05 policy-db-migrator | Preparing upgrade release version: 1300 15:03:05 policy-db-migrator | Done 15:03:05 policy-db-migrator | List of databases 15:03:05 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 15:03:05 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 15:03:05 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 15:03:05 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 15:03:05 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 15:03:05 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 15:03:05 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 15:03:05 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 15:03:05 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 15:03:05 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 15:03:05 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 15:03:05 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 15:03:05 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 15:03:05 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 15:03:05 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 15:03:05 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 15:03:05 policy-db-migrator | | | | | | | | | postgres=CTc/postgres
15:03:05 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 15:03:05 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 15:03:05 policy-db-migrator | (9 rows) 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | name | version 15:03:05 policy-db-migrator | -------------+--------- 15:03:05 policy-db-migrator | policyadmin | 0 15:03:05 policy-db-migrator | (1 row) 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 15:03:05 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 15:03:05 policy-db-migrator | (0 rows) 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 15:03:05 policy-db-migrator | List of databases 15:03:05 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 15:03:05 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 15:03:05 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 15:03:05 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 15:03:05 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 15:03:05 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 15:03:05 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 15:03:05 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 15:03:05 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 15:03:05 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 15:03:05 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 15:03:05 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 15:03:05 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 15:03:05 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 15:03:05 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 15:03:05 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 15:03:05 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 15:03:05 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 15:03:05 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 15:03:05 policy-db-migrator | (9 rows) 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | upgrade: 0 -> 1300 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 
policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 
0260-jpatoscanodetype_metadata.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 
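Every "> upgrade NNNN-*.sql" block in this run has the same shape: execute one versioned script, echo the psql output (CREATE TABLE, ALTER TABLE, ...), record an audit row (the INSERT 0 1, which lands in the changelog table listed at the end of the run), and report rc=0. A minimal sketch of that loop, assuming the scripts sit in a directory sorted by their numeric prefix (the real migrator is a shell wrapper around psql):

# Run each versioned SQL file with psql and report its return code,
# mirroring the "> upgrade ... rc=0" pattern in the log.
import glob
import subprocess

for script in sorted(glob.glob("upgrade/*.sql")):   # hypothetical layout
    print(f"> upgrade {script}")
    proc = subprocess.run(
        ["psql", "-h", "postgres", "-U", "policy_user",
         "-d", "policyadmin", "-f", script],
        capture_output=True, text=True,
    )
    print(proc.stdout, end="")      # e.g. CREATE TABLE / INSERT 0 1
    print(f"rc={proc.returncode}")
    if proc.returncode != 0:        # stop at the first failing script
        break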
15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0450-pdpgroup.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0470-pdp.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0570-toscadatatype.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator 
| 15:03:05 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0630-toscanodetype.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0660-toscaparameter.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0670-toscapolicies.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0690-toscapolicy.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0730-toscaproperty.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 
0740-toscarelationshiptype.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0770-toscarequirement.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0780-toscarequirements.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0820-toscatrigger.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 
15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 
1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0100-pdp.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0210-sequence.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0220-sequence.sql 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | 
rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0120-toscatrigger.sql 15:03:05 policy-db-migrator | DROP TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0140-toscaparameter.sql 15:03:05 policy-db-migrator | DROP TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0150-toscaproperty.sql 15:03:05 policy-db-migrator | DROP TABLE 15:03:05 policy-db-migrator | DROP TABLE 15:03:05 policy-db-migrator | DROP TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0100-upgrade.sql 15:03:05 policy-db-migrator | msg 15:03:05 policy-db-migrator | --------------------------- 15:03:05 policy-db-migrator | upgrade to 1100 completed 15:03:05 policy-db-migrator | (1 row) 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 15:03:05 policy-db-migrator | DROP INDEX 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0120-audit_sequence.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 15:03:05 policy-db-migrator | DROP TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 15:03:05 policy-db-migrator | DROP TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 
0120-statistics_sequence.sql 15:03:05 policy-db-migrator | DROP TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | policyadmin: OK: upgrade (1300) 15:03:05 policy-db-migrator | List of databases 15:03:05 policy-db-migrator | Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges 15:03:05 policy-db-migrator | -------------------+-------------+----------+-----------------+------------+------------+------------+-----------+----------------------------- 15:03:05 policy-db-migrator | clampacm | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 15:03:05 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 15:03:05 policy-db-migrator | migration | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 15:03:05 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 15:03:05 policy-db-migrator | operationshistory | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 15:03:05 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 15:03:05 policy-db-migrator | policyadmin | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 15:03:05 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 15:03:05 policy-db-migrator | policyclamp | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 15:03:05 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 15:03:05 policy-db-migrator | pooling | policy_user | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =Tc/policy_user + 15:03:05 policy-db-migrator | | | | | | | | | policy_user=CTc/policy_user 15:03:05 policy-db-migrator | postgres | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | 15:03:05 policy-db-migrator | template0 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 15:03:05 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 15:03:05 policy-db-migrator | template1 | postgres | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | =c/postgres + 15:03:05 policy-db-migrator | | | | | | | | | postgres=CTc/postgres 15:03:05 policy-db-migrator | (9 rows) 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | NOTICE: relation "policyadmin_schema_changelog" already exists, skipping 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | name | version 15:03:05 policy-db-migrator | -------------+--------- 15:03:05 policy-db-migrator | policyadmin | 1300 15:03:05 policy-db-migrator | (1 row) 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 15:03:05 policy-db-migrator | -----+---------------------------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 15:03:05 policy-db-migrator | 1 | 0100-jpapdpgroup_properties.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:56:59.053759 15:03:05 policy-db-migrator | 2 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:56:59.179233 15:03:05 policy-db-migrator | 3 | 0120-jpapdpsubgroup_policies.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:56:59.318148 15:03:05 policy-db-migrator | 4 | 0130-jpapdpsubgroup_properties.sql | 
upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:56:59.42566 15:03:05 policy-db-migrator | 5 | 0140-jpapdpsubgroup_supportedpolicytypes.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:56:59.555803 15:03:05 policy-db-migrator | 6 | 0150-jpatoscacapabilityassignment_attributes.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:56:59.663017 15:03:05 policy-db-migrator | 7 | 0160-jpatoscacapabilityassignment_metadata.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:56:59.824595 15:03:05 policy-db-migrator | 8 | 0170-jpatoscacapabilityassignment_occurrences.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:56:59.900553 15:03:05 policy-db-migrator | 9 | 0180-jpatoscacapabilityassignment_properties.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:00.039213 15:03:05 policy-db-migrator | 10 | 0190-jpatoscacapabilitytype_metadata.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:00.167297 15:03:05 policy-db-migrator | 11 | 0200-jpatoscacapabilitytype_properties.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:00.225132 15:03:05 policy-db-migrator | 12 | 0210-jpatoscadatatype_constraints.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:00.297872 15:03:05 policy-db-migrator | 13 | 0220-jpatoscadatatype_metadata.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:00.499987 15:03:05 policy-db-migrator | 14 | 0230-jpatoscadatatype_properties.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:00.565956 15:03:05 policy-db-migrator | 15 | 0240-jpatoscanodetemplate_metadata.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:00.710519 15:03:05 policy-db-migrator | 16 | 0250-jpatoscanodetemplate_properties.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:00.854158 15:03:05 policy-db-migrator | 17 | 0260-jpatoscanodetype_metadata.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:00.974406 15:03:05 policy-db-migrator | 18 | 0270-jpatoscanodetype_properties.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:01.1787 15:03:05 policy-db-migrator | 19 | 0280-jpatoscapolicy_metadata.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:01.24538 15:03:05 policy-db-migrator | 20 | 0290-jpatoscapolicy_properties.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:01.341173 15:03:05 policy-db-migrator | 21 | 0300-jpatoscapolicy_targets.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:01.436048 15:03:05 policy-db-migrator | 22 | 0310-jpatoscapolicytype_metadata.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:01.600817 15:03:05 policy-db-migrator | 23 | 0320-jpatoscapolicytype_properties.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:01.747248 15:03:05 policy-db-migrator | 24 | 0330-jpatoscapolicytype_targets.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:01.813186 15:03:05 policy-db-migrator | 25 | 0340-jpatoscapolicytype_triggers.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:01.877395 15:03:05 policy-db-migrator | 26 | 0350-jpatoscaproperty_constraints.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:02.003252 15:03:05 policy-db-migrator | 27 | 0360-jpatoscaproperty_metadata.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:02.162319 15:03:05 policy-db-migrator | 28 | 
0370-jpatoscarelationshiptype_metadata.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:02.252808 15:03:05 policy-db-migrator | 29 | 0380-jpatoscarelationshiptype_properties.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:02.363963 15:03:05 policy-db-migrator | 30 | 0390-jpatoscarequirement_metadata.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:02.510132 15:03:05 policy-db-migrator | 31 | 0400-jpatoscarequirement_occurrences.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:02.613369 15:03:05 policy-db-migrator | 32 | 0410-jpatoscarequirement_properties.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:02.725054 15:03:05 policy-db-migrator | 33 | 0420-jpatoscaservicetemplate_metadata.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:02.875445 15:03:05 policy-db-migrator | 34 | 0430-jpatoscatopologytemplate_inputs.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:03.062856 15:03:05 policy-db-migrator | 35 | 0440-pdpgroup_pdpsubgroup.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:03.157807 15:03:05 policy-db-migrator | 36 | 0450-pdpgroup.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:03.270949 15:03:05 policy-db-migrator | 37 | 0460-pdppolicystatus.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:03.420612 15:03:05 policy-db-migrator | 38 | 0470-pdp.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:03.541897 15:03:05 policy-db-migrator | 39 | 0480-pdpstatistics.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:03.644586 15:03:05 policy-db-migrator | 40 | 0490-pdpsubgroup_pdp.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:03.821715 15:03:05 policy-db-migrator | 41 | 0500-pdpsubgroup.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:03.900978 15:03:05 policy-db-migrator | 42 | 0510-toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:04.109222 15:03:05 policy-db-migrator | 43 | 0520-toscacapabilityassignments.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:04.222914 15:03:05 policy-db-migrator | 44 | 0530-toscacapabilityassignments_toscacapabilityassignment.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:04.407434 15:03:05 policy-db-migrator | 45 | 0540-toscacapabilitytype.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:04.608776 15:03:05 policy-db-migrator | 46 | 0550-toscacapabilitytypes.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:04.700162 15:03:05 policy-db-migrator | 47 | 0560-toscacapabilitytypes_toscacapabilitytype.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:04.838402 15:03:05 policy-db-migrator | 48 | 0570-toscadatatype.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:05.021675 15:03:05 policy-db-migrator | 49 | 0580-toscadatatypes.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:05.117236 15:03:05 policy-db-migrator | 50 | 0590-toscadatatypes_toscadatatype.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:05.278679 15:03:05 policy-db-migrator | 51 | 0600-toscanodetemplate.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:05.424944 15:03:05 policy-db-migrator | 52 | 0610-toscanodetemplates.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:05.552431 15:03:05 policy-db-migrator | 53 | 
0620-toscanodetemplates_toscanodetemplate.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:05.639031 15:03:05 policy-db-migrator | 54 | 0630-toscanodetype.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:05.830836 15:03:05 policy-db-migrator | 55 | 0640-toscanodetypes.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:06.026002 15:03:05 policy-db-migrator | 56 | 0650-toscanodetypes_toscanodetype.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:06.183819 15:03:05 policy-db-migrator | 57 | 0660-toscaparameter.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:06.328498 15:03:05 policy-db-migrator | 58 | 0670-toscapolicies.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:06.413835 15:03:05 policy-db-migrator | 59 | 0680-toscapolicies_toscapolicy.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:06.629541 15:03:05 policy-db-migrator | 60 | 0690-toscapolicy.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:06.831821 15:03:05 policy-db-migrator | 61 | 0700-toscapolicytype.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:07.086234 15:03:05 policy-db-migrator | 62 | 0710-toscapolicytypes.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:07.341684 15:03:05 policy-db-migrator | 63 | 0720-toscapolicytypes_toscapolicytype.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:07.472852 15:03:05 policy-db-migrator | 64 | 0730-toscaproperty.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:07.738706 15:03:05 policy-db-migrator | 65 | 0740-toscarelationshiptype.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:07.906068 15:03:05 policy-db-migrator | 66 | 0750-toscarelationshiptypes.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:07.990683 15:03:05 policy-db-migrator | 67 | 0760-toscarelationshiptypes_toscarelationshiptype.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:08.076334 15:03:05 policy-db-migrator | 68 | 0770-toscarequirement.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:08.210704 15:03:05 policy-db-migrator | 69 | 0780-toscarequirements.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:08.300124 15:03:05 policy-db-migrator | 70 | 0790-toscarequirements_toscarequirement.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:08.456459 15:03:05 policy-db-migrator | 71 | 0800-toscaservicetemplate.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:08.594344 15:03:05 policy-db-migrator | 72 | 0810-toscatopologytemplate.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:08.76828 15:03:05 policy-db-migrator | 73 | 0820-toscatrigger.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:08.906129 15:03:05 policy-db-migrator | 74 | 0830-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:09.057413 15:03:05 policy-db-migrator | 75 | 0840-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:09.171141 15:03:05 policy-db-migrator | 76 | 0850-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:09.364095 15:03:05 policy-db-migrator | 77 | 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:09.439385 15:03:05 policy-db-migrator 
| 78 | 0870-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:09.573647 15:03:05 policy-db-migrator | 79 | 0880-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:09.683943 15:03:05 policy-db-migrator | 80 | 0890-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:09.832899 15:03:05 policy-db-migrator | 81 | 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:09.921588 15:03:05 policy-db-migrator | 82 | 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:10.00975 15:03:05 policy-db-migrator | 83 | 0920-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:10.146896 15:03:05 policy-db-migrator | 84 | 0940-PdpPolicyStatus_PdpGroup.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:10.26562 15:03:05 policy-db-migrator | 85 | 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:10.398921 15:03:05 policy-db-migrator | 86 | 0960-FK_ToscaNodeTemplate_capabilitiesName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:10.492368 15:03:05 policy-db-migrator | 87 | 0970-FK_ToscaNodeTemplate_requirementsName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:10.598286 15:03:05 policy-db-migrator | 88 | 0980-FK_ToscaNodeType_requirementsName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:10.674298 15:03:05 policy-db-migrator | 89 | 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:10.734813 15:03:05 policy-db-migrator | 90 | 1000-FK_ToscaServiceTemplate_dataTypesName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:10.819375 15:03:05 policy-db-migrator | 91 | 1010-FK_ToscaServiceTemplate_nodeTypesName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:10.960667 15:03:05 policy-db-migrator | 92 | 1020-FK_ToscaServiceTemplate_policyTypesName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:11.043764 15:03:05 policy-db-migrator | 93 | 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:11.128569 15:03:05 policy-db-migrator | 94 | 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:11.212968 15:03:05 policy-db-migrator | 95 | 1050-FK_ToscaTopologyTemplate_policyName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:11.306793 15:03:05 policy-db-migrator | 96 | 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql | upgrade | 0 | 0800 | 1306251456580800u | 1 | 2025-06-13 14:57:11.395274 15:03:05 policy-db-migrator | 97 | 0100-pdp.sql | upgrade | 0800 | 0900 | 1306251456580900u | 1 | 2025-06-13 14:57:11.469525 15:03:05 policy-db-migrator | 98 | 0110-idx_tsidx1.sql | upgrade | 0800 | 0900 | 1306251456580900u | 1 | 2025-06-13 14:57:11.626866 15:03:05 policy-db-migrator | 99 | 0120-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1306251456580900u | 1 | 2025-06-13 14:57:11.719083 15:03:05 policy-db-migrator | 100 | 0130-pdpstatistics.sql | upgrade | 0800 | 0900 | 1306251456580900u | 1 | 2025-06-13 14:57:11.792423 15:03:05 policy-db-migrator | 101 | 
0140-pk_pdpstatistics.sql | upgrade | 0800 | 0900 | 1306251456580900u | 1 | 2025-06-13 14:57:11.915844 15:03:05 policy-db-migrator | 102 | 0150-pdpstatistics.sql | upgrade | 0800 | 0900 | 1306251456580900u | 1 | 2025-06-13 14:57:11.978238 15:03:05 policy-db-migrator | 103 | 0160-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306251456580900u | 1 | 2025-06-13 14:57:12.053589 15:03:05 policy-db-migrator | 104 | 0170-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306251456580900u | 1 | 2025-06-13 14:57:12.111898 15:03:05 policy-db-migrator | 105 | 0180-jpapdpstatistics_enginestats.sql | upgrade | 0800 | 0900 | 1306251456580900u | 1 | 2025-06-13 14:57:12.196341 15:03:05 policy-db-migrator | 106 | 0190-jpapolicyaudit.sql | upgrade | 0800 | 0900 | 1306251456580900u | 1 | 2025-06-13 14:57:12.306165 15:03:05 policy-db-migrator | 107 | 0200-JpaPolicyAuditIndex_timestamp.sql | upgrade | 0800 | 0900 | 1306251456580900u | 1 | 2025-06-13 14:57:12.435731 15:03:05 policy-db-migrator | 108 | 0210-sequence.sql | upgrade | 0800 | 0900 | 1306251456580900u | 1 | 2025-06-13 14:57:12.567638 15:03:05 policy-db-migrator | 109 | 0220-sequence.sql | upgrade | 0800 | 0900 | 1306251456580900u | 1 | 2025-06-13 14:57:12.667666 15:03:05 policy-db-migrator | 110 | 0100-jpatoscapolicy_targets.sql | upgrade | 0900 | 1000 | 1306251456581000u | 1 | 2025-06-13 14:57:12.840627 15:03:05 policy-db-migrator | 111 | 0110-jpatoscapolicytype_targets.sql | upgrade | 0900 | 1000 | 1306251456581000u | 1 | 2025-06-13 14:57:12.96512 15:03:05 policy-db-migrator | 112 | 0120-toscatrigger.sql | upgrade | 0900 | 1000 | 1306251456581000u | 1 | 2025-06-13 14:57:13.1105 15:03:05 policy-db-migrator | 113 | 0130-jpatoscapolicytype_triggers.sql | upgrade | 0900 | 1000 | 1306251456581000u | 1 | 2025-06-13 14:57:13.417587 15:03:05 policy-db-migrator | 114 | 0140-toscaparameter.sql | upgrade | 0900 | 1000 | 1306251456581000u | 1 | 2025-06-13 14:57:13.575971 15:03:05 policy-db-migrator | 115 | 0150-toscaproperty.sql | upgrade | 0900 | 1000 | 1306251456581000u | 1 | 2025-06-13 14:57:13.723385 15:03:05 policy-db-migrator | 116 | 0160-jpapolicyaudit_pk.sql | upgrade | 0900 | 1000 | 1306251456581000u | 1 | 2025-06-13 14:57:13.948829 15:03:05 policy-db-migrator | 117 | 0170-pdpstatistics_pk.sql | upgrade | 0900 | 1000 | 1306251456581000u | 1 | 2025-06-13 14:57:14.110742 15:03:05 policy-db-migrator | 118 | 0180-jpatoscanodetemplate_metadata.sql | upgrade | 0900 | 1000 | 1306251456581000u | 1 | 2025-06-13 14:57:14.195417 15:03:05 policy-db-migrator | 119 | 0100-upgrade.sql | upgrade | 1000 | 1100 | 1306251456581100u | 1 | 2025-06-13 14:57:14.2712 15:03:05 policy-db-migrator | 120 | 0100-jpapolicyaudit_renameuser.sql | upgrade | 1100 | 1200 | 1306251456581200u | 1 | 2025-06-13 14:57:14.371282 15:03:05 policy-db-migrator | 121 | 0110-idx_tsidx1.sql | upgrade | 1100 | 1200 | 1306251456581200u | 1 | 2025-06-13 14:57:14.491664 15:03:05 policy-db-migrator | 122 | 0120-audit_sequence.sql | upgrade | 1100 | 1200 | 1306251456581200u | 1 | 2025-06-13 14:57:14.626485 15:03:05 policy-db-migrator | 123 | 0130-statistics_sequence.sql | upgrade | 1100 | 1200 | 1306251456581200u | 1 | 2025-06-13 14:57:14.733721 15:03:05 policy-db-migrator | 124 | 0100-pdpstatistics.sql | upgrade | 1200 | 1300 | 1306251456581300u | 1 | 2025-06-13 14:57:14.807741 15:03:05 policy-db-migrator | 125 | 0110-jpapdpstatistics_enginestats.sql | upgrade | 1200 | 1300 | 1306251456581300u | 1 | 2025-06-13 14:57:14.913479 15:03:05 policy-db-migrator | 126 | 
0120-statistics_sequence.sql | upgrade | 1200 | 1300 | 1306251456581300u | 1 | 2025-06-13 14:57:14.979221 15:03:05 policy-db-migrator | (126 rows) 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | policyadmin: OK @ 1300 15:03:05 policy-db-migrator | Initializing clampacm... 15:03:05 policy-db-migrator | 97 blocks 15:03:05 policy-db-migrator | Preparing upgrade release version: 1400 15:03:05 policy-db-migrator | Preparing upgrade release version: 1500 15:03:05 policy-db-migrator | Preparing upgrade release version: 1600 15:03:05 policy-db-migrator | Preparing upgrade release version: 1601 15:03:05 policy-db-migrator | Preparing upgrade release version: 1700 15:03:05 policy-db-migrator | Preparing upgrade release version: 1701 15:03:05 policy-db-migrator | Done 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | name | version 15:03:05 policy-db-migrator | ----------+--------- 15:03:05 policy-db-migrator | clampacm | 0 15:03:05 policy-db-migrator | (1 row) 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 15:03:05 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 15:03:05 policy-db-migrator | (0 rows) 15:03:05 policy-db-migrator | 15:03:05
policy-db-migrator | clampacm: upgrade available: 0 -> 1701 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 15:03:05 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | upgrade: 0 -> 1701 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0100-automationcomposition.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0400-nodetemplatestate.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0500-participant.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05
policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0600-participantsupportedelements.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0700-ac_compositionId_index.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0800-ac_element_fk_index.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0900-dt_element_fk_index.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 1000-supported_element_fk_index.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 1100-automationcompositionelement_fk.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 1200-nodetemplate_fk.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 1300-participantsupportedelements_fk.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0100-automationcomposition.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0200-automationcompositiondefinition.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0300-participantreplica.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0400-participant.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0500-participant_replica_fk_index.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0600-participant_replica_fk.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0700-automationcompositionelement.sql 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0800-nodetemplatestate.sql 15:03:05 policy-db-migrator | 
UPDATE 0 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0100-automationcomposition.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0100-automationcomposition.sql 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0200-automationcompositionelement.sql 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0100-message.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0200-messagejob.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0300-messagejob_identificationId_index.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0100-automationcompositionrollback.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0200-automationcomposition.sql 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0300-automationcompositionelement.sql 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0400-automationcomposition_fk.sql 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0500-automationcompositiondefinition.sql 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0600-nodetemplatestate.sql 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 
policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0700-mb_identificationId_index.sql 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0800-participantreplica.sql 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0900-participantsupportedacelements.sql 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | UPDATE 0 15:03:05 policy-db-migrator | ALTER TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | clampacm: OK: upgrade (1701) 15:03:05 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | NOTICE: relation "clampacm_schema_changelog" already exists, skipping 15:03:05 policy-db-migrator | name | version 15:03:05 policy-db-migrator | ----------+--------- 15:03:05
policy-db-migrator | clampacm | 1701 15:03:05 policy-db-migrator | (1 row) 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 15:03:05 policy-db-migrator | ----+--------------------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 15:03:05 policy-db-migrator | 1 | 0100-automationcomposition.sql | upgrade | 1300 | 1400 | 1306251457151400u | 1 | 2025-06-13 14:57:15.824342 15:03:05 policy-db-migrator | 2 | 0200-automationcompositiondefinition.sql | upgrade | 1300 | 1400 | 1306251457151400u | 1 | 2025-06-13 14:57:15.946905 15:03:05 policy-db-migrator | 3 | 0300-automationcompositionelement.sql | upgrade | 1300 | 1400 | 1306251457151400u | 1 | 2025-06-13 14:57:16.121011 15:03:05 policy-db-migrator | 4 | 0400-nodetemplatestate.sql | upgrade | 1300 | 1400 | 1306251457151400u | 1 | 2025-06-13 14:57:16.317684 15:03:05 policy-db-migrator | 5 | 0500-participant.sql | upgrade | 1300 | 1400 | 1306251457151400u | 1 | 2025-06-13 14:57:16.550787 15:03:05 policy-db-migrator | 6 | 0600-participantsupportedelements.sql | upgrade | 1300 | 1400 | 1306251457151400u | 1 | 2025-06-13 14:57:16.724431 15:03:05 policy-db-migrator | 7 | 0700-ac_compositionId_index.sql | upgrade | 1300 | 1400 | 1306251457151400u | 1 | 2025-06-13 14:57:16.915452 15:03:05 policy-db-migrator | 8 | 0800-ac_element_fk_index.sql | upgrade | 1300 | 1400 | 1306251457151400u | 1 | 2025-06-13 14:57:17.017833 15:03:05 policy-db-migrator | 9 | 0900-dt_element_fk_index.sql | upgrade | 1300 | 1400 | 1306251457151400u | 1 | 2025-06-13 14:57:17.171533 15:03:05 policy-db-migrator | 10 | 1000-supported_element_fk_index.sql | upgrade | 1300 | 1400 | 1306251457151400u | 1 | 2025-06-13 14:57:17.253084 15:03:05 policy-db-migrator | 11 | 1100-automationcompositionelement_fk.sql | upgrade | 1300 | 1400 | 1306251457151400u | 1 | 2025-06-13 14:57:17.313724 15:03:05 policy-db-migrator | 12 | 1200-nodetemplate_fk.sql | upgrade | 1300 | 1400 | 1306251457151400u | 1 | 2025-06-13 14:57:17.387638 15:03:05 policy-db-migrator | 13 | 1300-participantsupportedelements_fk.sql | upgrade | 1300 | 1400 | 1306251457151400u | 1 | 2025-06-13 14:57:17.517567 15:03:05 policy-db-migrator | 14 | 0100-automationcomposition.sql | upgrade | 1400 | 1500 | 1306251457151500u | 1 | 2025-06-13 14:57:17.573643 15:03:05 policy-db-migrator | 15 | 0200-automationcompositiondefinition.sql | upgrade | 1400 | 1500 | 1306251457151500u | 1 | 2025-06-13 14:57:17.640773 15:03:05 policy-db-migrator | 16 | 0300-participantreplica.sql | upgrade | 1400 | 1500 | 1306251457151500u | 1 | 2025-06-13 14:57:17.693203 15:03:05 policy-db-migrator | 17 | 0400-participant.sql | upgrade | 1400 | 1500 | 1306251457151500u | 1 | 2025-06-13 14:57:17.744706 15:03:05 policy-db-migrator | 18 | 0500-participant_replica_fk_index.sql | upgrade | 1400 | 1500 | 1306251457151500u | 1 | 2025-06-13 14:57:17.822816 15:03:05 policy-db-migrator | 19 | 0600-participant_replica_fk.sql | upgrade | 1400 | 1500 | 1306251457151500u | 1 | 2025-06-13 14:57:17.886094 15:03:05 policy-db-migrator | 20 | 0700-automationcompositionelement.sql | upgrade | 1400 | 1500 | 1306251457151500u | 1 | 2025-06-13 14:57:17.965867 15:03:05 policy-db-migrator | 21 | 0800-nodetemplatestate.sql | upgrade | 1400 | 1500 | 1306251457151500u | 1 | 2025-06-13 14:57:18.018702 15:03:05 policy-db-migrator | 22 | 0100-automationcomposition.sql | upgrade | 1500 | 1600 | 1306251457151600u | 1 | 2025-06-13 
14:57:18.089442 15:03:05 policy-db-migrator | 23 | 0200-automationcompositionelement.sql | upgrade | 1500 | 1600 | 1306251457151600u | 1 | 2025-06-13 14:57:18.1404 15:03:05 policy-db-migrator | 24 | 0100-automationcomposition.sql | upgrade | 1501 | 1601 | 1306251457151601u | 1 | 2025-06-13 14:57:18.191564 15:03:05 policy-db-migrator | 25 | 0200-automationcompositionelement.sql | upgrade | 1501 | 1601 | 1306251457151601u | 1 | 2025-06-13 14:57:18.248447 15:03:05 policy-db-migrator | 26 | 0100-message.sql | upgrade | 1600 | 1700 | 1306251457151700u | 1 | 2025-06-13 14:57:18.424401 15:03:05 policy-db-migrator | 27 | 0200-messagejob.sql | upgrade | 1600 | 1700 | 1306251457151700u | 1 | 2025-06-13 14:57:18.72823 15:03:05 policy-db-migrator | 28 | 0300-messagejob_identificationId_index.sql | upgrade | 1600 | 1700 | 1306251457151700u | 1 | 2025-06-13 14:57:18.873237 15:03:05 policy-db-migrator | 29 | 0100-automationcompositionrollback.sql | upgrade | 1601 | 1701 | 1306251457151701u | 1 | 2025-06-13 14:57:18.961101 15:03:05 policy-db-migrator | 30 | 0200-automationcomposition.sql | upgrade | 1601 | 1701 | 1306251457151701u | 1 | 2025-06-13 14:57:19.026455 15:03:05 policy-db-migrator | 31 | 0300-automationcompositionelement.sql | upgrade | 1601 | 1701 | 1306251457151701u | 1 | 2025-06-13 14:57:19.087044 15:03:05 policy-db-migrator | 32 | 0400-automationcomposition_fk.sql | upgrade | 1601 | 1701 | 1306251457151701u | 1 | 2025-06-13 14:57:19.184887 15:03:05 policy-db-migrator | 33 | 0500-automationcompositiondefinition.sql | upgrade | 1601 | 1701 | 1306251457151701u | 1 | 2025-06-13 14:57:19.277147 15:03:05 policy-db-migrator | 34 | 0600-nodetemplatestate.sql | upgrade | 1601 | 1701 | 1306251457151701u | 1 | 2025-06-13 14:57:19.365526 15:03:05 policy-db-migrator | 35 | 0700-mb_identificationId_index.sql | upgrade | 1601 | 1701 | 1306251457151701u | 1 | 2025-06-13 14:57:19.498969 15:03:05 policy-db-migrator | 36 | 0800-participantreplica.sql | upgrade | 1601 | 1701 | 1306251457151701u | 1 | 2025-06-13 14:57:19.655659 15:03:05 policy-db-migrator | 37 | 0900-participantsupportedacelements.sql | upgrade | 1601 | 1701 | 1306251457151701u | 1 | 2025-06-13 14:57:19.751271 15:03:05 policy-db-migrator | (37 rows) 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | clampacm: OK @ 1701 15:03:05 policy-db-migrator | Initializing pooling... 
15:03:05 policy-db-migrator | 4 blocks 15:03:05 policy-db-migrator | Preparing upgrade release version: 1600 15:03:05 policy-db-migrator | Done 15:03:05 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | name | version 15:03:05 policy-db-migrator | ---------+--------- 15:03:05 policy-db-migrator | pooling | 0 15:03:05 policy-db-migrator | (1 row) 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 15:03:05 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 15:03:05 policy-db-migrator | (0 rows) 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | pooling: upgrade available: 0 -> 1600 15:03:05
policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | upgrade: 0 -> 1600 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0100-distributed.locking.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | pooling: OK: upgrade (1600) 15:03:05
policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 15:03:05 policy-db-migrator | NOTICE: relation "pooling_schema_changelog" already exists, skipping 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | name | version 15:03:05 policy-db-migrator | ---------+--------- 15:03:05 policy-db-migrator | pooling | 1600 15:03:05 policy-db-migrator | (1 row) 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 15:03:05 policy-db-migrator | ----+------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 15:03:05 policy-db-migrator | 1 | 0100-distributed.locking.sql | upgrade | 1500 | 1600 | 1306251457201600u | 1 | 2025-06-13 14:57:20.853756 15:03:05 policy-db-migrator | (1 row) 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | pooling: OK @ 1600 15:03:05 policy-db-migrator | Initializing operationshistory... 15:03:05 policy-db-migrator | 6 blocks 15:03:05 policy-db-migrator | Preparing upgrade release version: 1600 15:03:05 policy-db-migrator | Done 15:03:05
policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | name | version 15:03:05 policy-db-migrator | -------------------+--------- 15:03:05 policy-db-migrator | operationshistory | 0 15:03:05 policy-db-migrator | (1 row) 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 15:03:05 policy-db-migrator | ----+--------+-----------+--------------+------------+-----+---------+-------- 15:03:05 policy-db-migrator | (0 rows) 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | operationshistory: upgrade available: 0 -> 1600 15:03:05 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping 15:03:05 policy-db-migrator | upgrade:
0 -> 1600 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0100-ophistory_id_sequence.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | rc=0 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | > upgrade 0110-operationshistory.sql 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | CREATE INDEX 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | INSERT 0 1 15:03:05 policy-db-migrator | operationshistory: OK: upgrade (1600) 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | NOTICE: relation "schema_versions" already exists, skipping 15:03:05 policy-db-migrator | CREATE TABLE 15:03:05 policy-db-migrator | NOTICE: relation "operationshistory_schema_changelog" already exists, skipping 15:03:05 policy-db-migrator | name | version 15:03:05 policy-db-migrator | -------------------+--------- 15:03:05 policy-db-migrator | operationshistory | 1600 15:03:05 policy-db-migrator | (1 row) 15:03:05 policy-db-migrator | 15:03:05 policy-db-migrator | id | script | operation | from_version | to_version | tag | success | attime 15:03:05 policy-db-migrator | ----+--------------------------------+-----------+--------------+------------+-------------------+---------+---------------------------- 15:03:05 policy-db-migrator | 1 |
0100-ophistory_id_sequence.sql | upgrade | 1500 | 1600 | 1306251457211600u | 1 | 2025-06-13 14:57:21.865968
15:03:05 policy-db-migrator | 2 | 0110-operationshistory.sql | upgrade | 1500 | 1600 | 1306251457211600u | 1 | 2025-06-13 14:57:22.03333
15:03:05 policy-db-migrator | (2 rows)
15:03:05 policy-db-migrator |
15:03:05 policy-db-migrator | operationshistory: OK @ 1600
15:03:05 policy-opa-pdp | Waiting for kafka port 9092...
15:03:05 policy-opa-pdp | nc: connect to kafka (172.17.0.7) port 9092 (tcp) failed: Connection refused
15:03:05 policy-opa-pdp | Connection to kafka (172.17.0.7) 9092 port [tcp/*] succeeded!
15:03:05 policy-opa-pdp | Waiting for pap port 6969...
15:03:05 policy-opa-pdp | nc: connect to pap (172.17.0.9) port 6969 (tcp) failed: Connection refused
15:03:05 policy-opa-pdp | Connection to pap (172.17.0.9) 6969 port [tcp/*] succeeded!
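The stall above is the container entrypoint polling its dependencies before starting the PDP. A minimal sketch of that wait-for-port pattern, assuming a shell entrypoint along these lines (the actual script in policy/docker may differ; nc -v is what produces the failed/succeeded messages seen above):

wait_for() {
    # Poll until the TCP port accepts a connection; -z closes the
    # connection immediately, -v prints the failed/succeeded lines.
    local host="$1" port="$2"
    echo "Waiting for $host port $port..."
    until nc -vz "$host" "$port"; do
        sleep 2
    done
}
wait_for kafka 9092
wait_for pap 6969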
15:03:05 policy-opa-pdp | time="2025-06-13T14:58:28Z" level=debug msg="###################################### "
15:03:05 policy-opa-pdp | time="2025-06-13T14:58:28Z" level=debug msg="OPA-PDP: Starting initialisation "
15:03:05 policy-opa-pdp | time="2025-06-13T14:58:28Z" level=debug msg="###################################### "
15:03:05 policy-opa-pdp | time="2025-06-13T14:58:28Z" level=warning msg="KAFKA_URL not defined, using default value"
15:03:05 policy-opa-pdp | time="2025-06-13T14:58:28Z" level=warning msg="PAP_TOPIC not defined, using default value"
15:03:05 policy-opa-pdp | time="2025-06-13T14:58:28Z" level=warning msg="PATCH_TOPIC not defined, using default value"
15:03:05 policy-opa-pdp | time="2025-06-13T14:58:28Z" level=warning msg="PATCH_GROUPID not defined, using default value"
15:03:05 policy-opa-pdp | time="2025-06-13T14:58:28Z" level=warning msg="API_USER not defined, using default value"
15:03:05 policy-opa-pdp | time="2025-06-13T14:58:28Z" level=warning msg="API_PASSWORD not defined, using default value"
15:03:05 policy-opa-pdp | time="2025-06-13T14:58:28Z" level=warning msg="UseSASLForKAFKA not defined, using default value"
15:03:05 policy-opa-pdp | decodedConfig org.apache.kafka.common.security.scram.ScramLoginModule required username="policy-opa-pdp-ku" password=""
15:03:05 policy-opa-pdp | time="2025-06-13T14:58:28Z" level=debug msg="Username: "
15:03:05 policy-opa-pdp | time="2025-06-13T14:58:28Z" level=debug msg="Password: "
15:03:05 policy-opa-pdp | time="2025-06-13T14:58:28Z" level=warning msg="USE_KAFKA_FOR_PATCH not defined, using default value: false"
15:03:05 policy-opa-pdp | time="2025-06-13T14:58:28Z" level=debug msg="Configuration module: environment initialised"
15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:58:28.7368+00:00] logger initialised Filepath = /var/logs/logs.log, Logsize(MB) = 10, Backups = 3, Loglevel = debug
15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:58:28.7370+00:00] Name: opa-641304f2-5b4c-46df-814c-634a7e4652a2
15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:58:28.7411+00:00] Starting OPA PDP Service
15:03:05 policy-opa-pdp | INFO[2025-06-13T14:58:33.7455+00:00] HTTP server started
15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:58:33.7467+00:00] Create an instance of OPA Object
15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:58:33.7468+00:00] Configure an instance of OPA Object
15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:58:33.7478+00:00] Topic start :::: policy-pdp-pap
15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:58:33.7481+00:00] Creating Kafka Consumer singleton instance
15:03:05 policy-opa-pdp | &map[auto.offset.reset:latest bootstrap.servers:kafka:9092 group.id:opa-pdp]DEBU[2025-06-13T14:58:33.7512+00:00] Topic Subscribed: policy-pdp-pap
15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:58:33.7513+00:00] Created Singleton consumer instance
15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:58:33.7650+00:00] Starting PDP Message Listener.....
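The config map dumped above (auto.offset.reset:latest, bootstrap.servers:kafka:9092, group.id:opa-pdp) is the consumer configuration for the policy-pdp-pap topic. A minimal sketch of that singleton consumer setup, assuming confluent-kafka-go (the package and function names here are illustrative, not the actual opa-pdp source):

package kafkacomm

import "github.com/confluentinc/confluent-kafka-go/kafka"

// NewPdpPapConsumer mirrors the logged settings: connect to kafka:9092
// in consumer group opa-pdp, start from the latest offset, and
// subscribe to the policy-pdp-pap topic.
func NewPdpPapConsumer() (*kafka.Consumer, error) {
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": "kafka:9092",
		"group.id":          "opa-pdp",
		"auto.offset.reset": "latest",
	})
	if err != nil {
		return nil, err
	}
	if err := c.SubscribeTopics([]string{"policy-pdp-pap"}, nil); err != nil {
		return nil, err
	}
	return c, nil
}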
15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:58:43.7654+00:00] New Ticker started with interval 60000 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:58:53.7747+00:00] After registration successful delay 15:03:05 policy-opa-pdp | 2025/06/13 14:59:43 KafkaProducer or producer produce message 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:43.7860+00:00] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"30da412e-18d5-4103-a9b9-1b79bf0032e2","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1749826783782","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:43.7861+00:00] Sending Heartbeat ... 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:43.8225+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"30da412e-18d5-4103-a9b9-1b79bf0032e2","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1749826783782","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:43.8226+00:00] messageType: PDP_STATUS 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:43.8227+00:00] discarding event of type PDP_STATUS 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.4554+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"1f423618-4816-40d2-af95-c2f95c4a4e89","timestampMs":1749826784381,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.4557+00:00] messageType: PDP_UPDATE 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.4563+00:00] PDP_UPDATE Message received: 
{"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"1f423618-4816-40d2-af95-c2f95c4a4e89","timestampMs":1749826784381,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.4565+00:00] Policy Is Allowed: slice.capacity.check 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.4565+00:00] Validating properties data for policy: slice.capacity.check 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.4565+00:00] Validating properties policy for policy: slice.capacity.check 15:03:05 policy-opa-pdp | INFO[2025-06-13T14:59:44.4565+00:00] Validation successful for policy: slice.capacity.check 15:03:05 policy-opa-pdp | INFO[2025-06-13T14:59:44.4602+00:00] Directory created: /opt/policies/slice/capacity/check 15:03:05 policy-opa-pdp | INFO[2025-06-13T14:59:44.4603+00:00] Policy file saved: /opt/policies/slice/capacity/check/policy.rego 15:03:05 policy-opa-pdp | INFO[2025-06-13T14:59:44.4606+00:00] Directory created: /opt/data/node/slice/capacity/check 15:03:05 policy-opa-pdp | INFO[2025-06-13T14:59:44.4607+00:00] Data file saved: /opt/data/node/slice/capacity/check/data.json 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.4608+00:00] Before calling combinedoutput 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.4833+00:00] Bundle Built Sucessfully.... 
15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.4854+00:00] storage not found creating : /node 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.4854+00:00] storage not found creating : /node/slice 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.4855+00:00] storage not found creating : /node/slice/capacity 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.4855+00:00] storage not found creating : /node/slice/capacity/check 15:03:05 policy-opa-pdp | INFO[2025-06-13T14:59:44.4855+00:00] PoliciesDeployed Map: { 15:03:05 policy-opa-pdp | "deployed_policies_dict": [ 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "data": [ 15:03:05 policy-opa-pdp | "node.slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy": [ 15:03:05 policy-opa-pdp | "slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:03:05 policy-opa-pdp | "policy-version": "1.0.0" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | ] 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.4855+00:00] Loaded Policy: slice.capacity.check 15:03:05 policy-opa-pdp | INFO[2025-06-13T14:59:44.4856+00:00] Processed policies_to_be_deployed successfully 15:03:05 policy-opa-pdp | INFO[2025-06-13T14:59:44.4856+00:00] Sending PDP Status With Update Response 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.4857+00:00] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"1f423618-4816-40d2-af95-c2f95c4a4e89","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"ab322ff8-d4fd-4165-9d6e-38ab6ebcd98d","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826784485","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | 2025/06/13 14:59:44 KafkaProducer or producer produce message 15:03:05 policy-opa-pdp | INFO[2025-06-13T14:59:44.4857+00:00] PDP_STATUS Message Sent Successfully 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.4857+00:00] 120000 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.4858+00:00] New Ticker started with interval 120000 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.4934+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"1f423618-4816-40d2-af95-c2f95c4a4e89","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"ab322ff8-d4fd-4165-9d6e-38ab6ebcd98d","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826784485","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.4934+00:00] messageType: PDP_STATUS 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.4934+00:00] discarding event of type PDP_STATUS 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.5233+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | 
{"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"408cff49-7a82-4c40-b611-4f2d9ab1965f","timestampMs":1749826784382,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.5233+00:00] messageType: PDP_STATE_CHANGE 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.5234+00:00] PDP STATE CHANGE message received: {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"408cff49-7a82-4c40-b611-4f2d9ab1965f","timestampMs":1749826784382,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.5235+00:00] State change from PASSIVE To : ACTIVE 15:03:05 policy-opa-pdp | INFO[2025-06-13T14:59:44.5235+00:00] Sending PDP Status With State Change response 15:03:05 policy-opa-pdp | 2025/06/13 14:59:44 KafkaProducer or producer produce message 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.5236+00:00] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"408cff49-7a82-4c40-b611-4f2d9ab1965f","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"40d89689-3927-47b3-bcc3-20ecf5869e25","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826784523","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | INFO[2025-06-13T14:59:44.5236+00:00] PDP_STATUS With State Change Message Sent Successfully 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.5317+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"408cff49-7a82-4c40-b611-4f2d9ab1965f","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"40d89689-3927-47b3-bcc3-20ecf5869e25","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826784523","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.5317+00:00] messageType: PDP_STATUS 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.5318+00:00] discarding event of type PDP_STATUS 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.8807+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d4991179-3f61-467b-b5fb-23820aa90cec","timestampMs":1749826784866,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.8808+00:00] messageType: PDP_UPDATE 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.8810+00:00] PDP_UPDATE Message received: 
{"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d4991179-3f61-467b-b5fb-23820aa90cec","timestampMs":1749826784866,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-opa-pdp | INFO[2025-06-13T14:59:44.8811+00:00] Sending PDP Status With Update Response 15:03:05 policy-opa-pdp | 2025/06/13 14:59:44 KafkaProducer or producer produce message 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.8811+00:00] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"d4991179-3f61-467b-b5fb-23820aa90cec","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"98e35d33-c13c-4f19-ae67-ff182b7db59a","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826784881","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | INFO[2025-06-13T14:59:44.8812+00:00] PDP_STATUS Message Sent Successfully 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.8812+00:00] 120000 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.8898+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"d4991179-3f61-467b-b5fb-23820aa90cec","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"98e35d33-c13c-4f19-ae67-ff182b7db59a","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826784881","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.8898+00:00] messageType: PDP_STATUS 15:03:05 policy-opa-pdp | DEBU[2025-06-13T14:59:44.8899+00:00] discarding event of type PDP_STATUS 15:03:05 policy-opa-pdp | 2025/06/13 15:00:43 KafkaProducer or producer produce message 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:43.7854+00:00] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"a5a8f0c1-0979-44c7-8dd8-2755e1d24f3f","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826843785","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:43.7855+00:00] Sending Heartbeat ... 
15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:43.7983+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"a5a8f0c1-0979-44c7-8dd8-2755e1d24f3f","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826843785","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:43.7984+00:00] messageType: PDP_STATUS 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:43.7984+00:00] discarding event of type PDP_STATUS 15:03:05 policy-opa-pdp | WARN[2025-06-13T15:00:45.0673+00:00] Invalid or Missing Request ID 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:45.0673+00:00] Received Health Check message 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:00:45.0743+00:00] PDP received a request to get data through API 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:45.0744+00:00] datapath to get Data : / 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:45.0745+00:00] Json Data at /: {"node":{"slice":{"capacity":{"check":{"threshold":70}}}},"system":{"version":{"build_commit":"","build_hostname":"","build_timestamp":"","version":"1.1.0"}}} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:46.4209+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"194dc0e9-80c2-4963-b95a-1439c0acd97d","timestampMs":1749826846361,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:46.4210+00:00] messageType: PDP_UPDATE 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:46.4211+00:00] PDP_UPDATE Message received: 
{"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"194dc0e9-80c2-4963-b95a-1439c0acd97d","timestampMs":1749826846361,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:46.4212+00:00] Check if Policy is Already Deployed: { 15:03:05 policy-opa-pdp | "deployed_policies_dict": [ 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "data": [ 15:03:05 policy-opa-pdp | "node.slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy": [ 15:03:05 policy-opa-pdp | "slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:03:05 policy-opa-pdp | "policy-version": "1.0.0" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | ] 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:00:46.4214+00:00] Policy is new and should be deployed: zoneB 1.0.6 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:46.4215+00:00] Policy Is Allowed: zoneB 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:46.4215+00:00] Validating properties data for policy: zoneB 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:46.4215+00:00] Validating properties policy for policy: zoneB 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:00:46.4215+00:00] Validation successful for policy: zoneB 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:00:46.4216+00:00] Directory created: /opt/policies/zoneB 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:00:46.4217+00:00] Policy file saved: /opt/policies/zoneB/policy.rego 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:00:46.4218+00:00] Directory created: /opt/data/node/zoneB 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:00:46.4219+00:00] Data file saved: /opt/data/node/zoneB/data.json 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:46.4219+00:00] Before calling combinedoutput 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:46.4542+00:00] Bundle Built Sucessfully.... 
15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:46.4597+00:00] storage not found creating : /node/zoneB 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:00:46.4600+00:00] PoliciesDeployed Map: { 15:03:05 policy-opa-pdp | "deployed_policies_dict": [ 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "data": [ 15:03:05 policy-opa-pdp | "node.slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy": [ 15:03:05 policy-opa-pdp | "slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:03:05 policy-opa-pdp | "policy-version": "1.0.0" 15:03:05 policy-opa-pdp | }, 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "data": [ 15:03:05 policy-opa-pdp | "node.zoneB" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy": [ 15:03:05 policy-opa-pdp | "zoneB" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy-id": "zoneB", 15:03:05 policy-opa-pdp | "policy-version": "1.0.6" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | ] 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:46.4601+00:00] Loaded Policy: zoneB 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:00:46.4602+00:00] Processed policies_to_be_deployed successfully 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:00:46.4604+00:00] Sending PDP Status With Update Response 15:03:05 policy-opa-pdp | 2025/06/13 15:00:46 KafkaProducer or producer produce message 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:46.4606+00:00] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"194dc0e9-80c2-4963-b95a-1439c0acd97d","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"84effe89-684b-40f8-9d0c-053273f9619a","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826846460","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:00:46.4608+00:00] PDP_STATUS Message Sent Successfully 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:46.4609+00:00] 0 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:46.4688+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"194dc0e9-80c2-4963-b95a-1439c0acd97d","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"84effe89-684b-40f8-9d0c-053273f9619a","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826846460","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:46.4691+00:00] messageType: PDP_STATUS 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:00:46.4691+00:00] discarding event of type PDP_STATUS 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:10.6219+00:00] PDP received a request to get data through API 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.6220+00:00] datapath to get Data : /node/zoneB/zone 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.6222+00:00] 
Json Data at /node/zoneB/zone: {"zone_access_logs":[{"access":"granted","log_id":"log1","timestamp":"2024-11-01T09:00:00Z","user":"user1","zone_id":"zoneA"},{"access":"denied","log_id":"log2","timestamp":"2024-11-01T10:30:00Z","user":"user2","zone_id":"zoneA"},{"access":"granted","log_id":"log3","timestamp":"2024-11-01T11:00:00Z","user":"user3","zone_id":"zoneB"}]} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.6327+00:00] PDP received a decision request. 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.6327+00:00] Headers processed for requestId: Unknown 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.6331+00:00] Validation successful for request fields 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.6332+00:00] SDK making a decision 15:03:05 policy-opa-pdp | {"decision_id":"dacc5397-7d5a-4ce6-862e-b0e014b71b1a","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"ee301e8e-ddf2-4a72-b955-5d597668a9ee","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":840,"timer_rego_query_compile_ns":153264,"timer_rego_query_eval_ns":526816,"timer_rego_query_parse_ns":118883,"timer_sdk_decision_eval_ns":972021},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-13T15:01:10Z","timestamp":"2025-06-13T15:01:10.633312455Z","type":"openpolicyagent.org/decision_logs"} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.6349+00:00] RAW opa Decision output: 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "ID": "dacc5397-7d5a-4ce6-862e-b0e014b71b1a", 15:03:05 policy-opa-pdp | "Result": { 15:03:05 policy-opa-pdp | "action_is_log_view": true, 15:03:05 policy-opa-pdp | "allow": true, 15:03:05 policy-opa-pdp | "has_zone_access": [ 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "access": "granted", 15:03:05 policy-opa-pdp | "user": "user1" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | ] 15:03:05 policy-opa-pdp | }, 15:03:05 policy-opa-pdp | "Provenance": { 15:03:05 policy-opa-pdp | "version": "1.1.0", 15:03:05 policy-opa-pdp | "build_commit": "", 15:03:05 policy-opa-pdp | "build_timestamp": "", 15:03:05 policy-opa-pdp | "build_hostname": "" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.6434+00:00] PDP received a decision request. 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.6435+00:00] Headers processed for requestId: Unknown 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.6438+00:00] Validation successful for request fields 15:03:05 policy-opa-pdp | WARN[2025-06-13T15:01:10.6439+00:00] Policy Name zoeB does not exist 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.6518+00:00] PDP received a decision request. 
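The first decision above evaluated the zoneB policy with this input, pretty-printed from the decision log record:

{
    "actions": ["view"],
    "datatypes": ["access", "user"],
    "log_id": "log1",
    "time_period": {
        "from": "2024-11-01T09:00:00Z",
        "to": "2024-11-01T10:00:00Z"
    },
    "zone_id": "zoneA"
}

Only log1 falls inside that zoneA time window, so has_zone_access yields {"access": "granted", "user": "user1"} and allow is true, matching the RAW output. The WARN that follows is evidently the suite's negative test: it requests the misspelled policy name zoeB, which does not exist.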
15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.6519+00:00] Headers processed for requestId: Unknown 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.6521+00:00] Validation successful for request fields 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.6522+00:00] SDK making a decision 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.6530+00:00] RAW opa Decision output: 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "ID": "98df488a-6184-43e7-8f2d-7ca16a8bc045", 15:03:05 policy-opa-pdp | "Result": { 15:03:05 policy-opa-pdp | "action_is_log_view": true, 15:03:05 policy-opa-pdp | "allow": true, 15:03:05 policy-opa-pdp | "has_zone_access": [ 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "access": "granted", 15:03:05 policy-opa-pdp | {"decision_id":"98df488a-6184-43e7-8f2d-7ca16a8bc045","input":{"actions":["view"],"datatypes":["access","user"],"log_id":"log1","time_period":{"from":"2024-11-01T09:00:00Z","to":"2024-11-01T10:00:00Z"},"zone_id":"zoneA"},"labels":{"id":"ee301e8e-ddf2-4a72-b955-5d597668a9ee","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":810,"timer_rego_query_eval_ns":447524,"timer_sdk_decision_eval_ns":590419},"msg":"Decision Log","nd_builtin_cache":null,"path":"zoneB","result":{"action_is_log_view":true,"allow":true,"has_zone_access":[{"access":"granted","user":"user1"}]},"time":"2025-06-13T15:01:10Z","timestamp":"2025-06-13T15:01:10.652260418Z","type":"openpolicyagent.org/decision_logs"} 15:03:05 policy-opa-pdp | "user": "user1" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | ] 15:03:05 policy-opa-pdp | }, 15:03:05 policy-opa-pdp | "Provenance": { 15:03:05 policy-opa-pdp | "version": "1.1.0", 15:03:05 policy-opa-pdp | "build_commit": "", 15:03:05 policy-opa-pdp | "build_timestamp": "", 15:03:05 policy-opa-pdp | "build_hostname": "" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.9823+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"034d61f0-e48f-404e-b0a7-5184bc7a67ad","timestampMs":1749826870952,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.9824+00:00] messageType: PDP_UPDATE 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.9826+00:00] PDP_UPDATE Message received: {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"034d61f0-e48f-404e-b0a7-5184bc7a67ad","timestampMs":1749826870952,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:10.9826+00:00] Found Policies to be undeployed 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:10.9826+00:00] Extracted Policy Name: zoneB, Version: 1.0.6 for undeployment 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.9829+00:00] Deleting Policy from OPA : /zoneB 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.9855+00:00] Removing policy directory: /opt/policies/zoneB 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.9858+00:00] Deleting data from OPA : /node/zoneB 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.9858+00:00] Analyzing dataPath: /node/zoneB 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.9859+00:00] Path 
segments: [ node zoneB] 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.9859+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/zoneB 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.9859+00:00] Removing data directory: /opt/data/node/zoneB 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:10.9862+00:00] PoliciesDeployed Map: { 15:03:05 policy-opa-pdp | "deployed_policies_dict": [ 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "data": [ 15:03:05 policy-opa-pdp | "node.slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy": [ 15:03:05 policy-opa-pdp | "slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:03:05 policy-opa-pdp | "policy-version": "1.0.0" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | ] 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.9862+00:00] Policies Map After Undeployment : { 15:03:05 policy-opa-pdp | "deployed_policies_dict": [ 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "data": [ 15:03:05 policy-opa-pdp | "node.slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy": [ 15:03:05 policy-opa-pdp | "slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:03:05 policy-opa-pdp | "policy-version": "1.0.0" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | ] 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:10.9862+00:00] Processed policies_to_be_undeployed successfully 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:10.9863+00:00] Sending PDP Status With Update Response 15:03:05 policy-opa-pdp | 2025/06/13 15:01:10 KafkaProducer or producer produce message 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.9864+00:00] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"034d61f0-e48f-404e-b0a7-5184bc7a67ad","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"509cf10f-d836-4d21-a4e0-2823236d2ce3","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826870986","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:10.9864+00:00] PDP_STATUS Message Sent Successfully 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.9864+00:00] 0 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.9934+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"034d61f0-e48f-404e-b0a7-5184bc7a67ad","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"509cf10f-d836-4d21-a4e0-2823236d2ce3","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826870986","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.9934+00:00] messageType: PDP_STATUS 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:10.9934+00:00] discarding event of type PDP_STATUS 15:03:05 
policy-opa-pdp | DEBU[2025-06-13T15:01:12.2323+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6f98eed8-53b6-407c-ad5e-8d210b368c3a","timestampMs":1749826872211,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:12.2323+00:00] messageType: PDP_UPDATE 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:12.2324+00:00] PDP_UPDATE Message received: {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6f98eed8-53b6-407c-ad5e-8d210b368c3a","timestampMs":1749826872211,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:12.2325+00:00] Check if Policy is Already Deployed: { 15:03:05 policy-opa-pdp | "deployed_policies_dict": [ 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "data": [ 15:03:05 policy-opa-pdp | "node.slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy": [ 15:03:05 policy-opa-pdp | "slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:03:05 policy-opa-pdp | "policy-version": "1.0.0" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | ] 
15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:12.2325+00:00] Policy is new and should be deployed: vehicle 1.0.6 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:12.2325+00:00] Policy Is Allowed: vehicle 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:12.2325+00:00] Validating properties data for policy: vehicle 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:12.2325+00:00] Validating properties policy for policy: vehicle 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:12.2325+00:00] Validation successful for policy: vehicle 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:12.2326+00:00] Directory created: /opt/policies/vehicle 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:12.2327+00:00] Policy file saved: /opt/policies/vehicle/policy.rego 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:12.2327+00:00] Directory created: /opt/data/node/vehicle 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:12.2327+00:00] Data file saved: /opt/data/node/vehicle/data.json 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:12.2328+00:00] Before calling combinedoutput 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:12.2591+00:00] Bundle Built Sucessfully.... 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:12.2643+00:00] storage not found creating : /node/vehicle 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:12.2644+00:00] PoliciesDeployed Map: { 15:03:05 policy-opa-pdp | "deployed_policies_dict": [ 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "data": [ 15:03:05 policy-opa-pdp | "node.slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy": [ 15:03:05 policy-opa-pdp | "slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:03:05 policy-opa-pdp | "policy-version": "1.0.0" 15:03:05 policy-opa-pdp | }, 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "data": [ 15:03:05 policy-opa-pdp | "node.vehicle" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | 2025/06/13 15:01:12 KafkaProducer or producer produce message 15:03:05 policy-opa-pdp | "policy": [ 15:03:05 policy-opa-pdp | "vehicle" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy-id": "vehicle", 15:03:05 policy-opa-pdp | "policy-version": "1.0.6" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | ] 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:12.2644+00:00] Loaded Policy: vehicle 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:12.2645+00:00] Processed policies_to_be_deployed successfully 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:12.2645+00:00] Sending PDP Status With Update Response 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:12.2646+00:00] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"6f98eed8-53b6-407c-ad5e-8d210b368c3a","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"3ed0e407-b113-4d2c-a858-6366a51c9b09","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826872264","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:12.2646+00:00] PDP_STATUS Message Sent Successfully 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:12.2647+00:00] 0 
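Decoded from the base64 payloads in the vehicle PDP_UPDATE above, the deployed Rego module and its node.vehicle data document are:

vehicle (policy.rego):

package vehicle

import  rego.v1

default allow := false

allow if {
    user_has_vehicle_access
    action_is_granted
}

action_is_granted if {
    "use" in input.actions
}

user_has_vehicle_access contains vehicle_data if {
    some vehicle in data.node.vehicle.vehicles
    vehicle.vehicle_id == input.vehicle_id
    vehicle.owner == input.user
    vehicle_data := {info: vehicle[info] | info in input.attributes}
}

node.vehicle (data.json):

{
  "vehicles": [
    { "vehicle_id": "v1", "owner": "user1", "type": "car", "status": "available" },
    { "vehicle_id": "v2", "owner": "user2", "type": "bike", "status": "in use" }
  ]
}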
15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:12.2763+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"6f98eed8-53b6-407c-ad5e-8d210b368c3a","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"3ed0e407-b113-4d2c-a858-6366a51c9b09","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826872264","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:12.2764+00:00] messageType: PDP_STATUS 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:12.2764+00:00] discarding event of type PDP_STATUS 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.3624+00:00] PDP received a request to get data through API 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.3625+00:00] datapath to get Data : /node/vehicle 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.3626+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.3724+00:00] PDP received a request to update data through API 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.3728+00:00] All fields are valid! 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.3728+00:00] data : [map[op:add path:/round value:trail]] 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.3728+00:00] policy name : vehicle 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.3728+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.3729+00:00] dirParts : [ node vehicle] 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.3730+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.3730+00:00] root: /node/vehicle 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.3730+00:00] path : round 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.3730+00:00] calling ParsePatchPathEscaped to check the path 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.3730+00:00] No path conflicts detected 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.3731+00:00] Updated the data in the corresponding path successfully 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.3797+00:00] PDP received a request to get data through API 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.3798+00:00] datapath to get Data : /node/vehicle 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.3799+00:00] Json Data at /node/vehicle: {"round":"trail","vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.3893+00:00] PDP received a request to update data through API 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.3896+00:00] All fields are valid! 
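The data updates in this sequence are JSON Patch requests scoped to the policy's data root (/node/vehicle); the bodies the service prints as Go maps (the add above, plus the replace and remove that follow) correspond to:

[{ "op": "add", "path": "/round", "value": "trail" }]
[{ "op": "replace", "path": "/round", "value": 578 }]
[{ "op": "remove", "path": "/round" }]

Each PATCH is followed by a GET on /node/vehicle whose output confirms the round field being added, changed to 578, and removed again.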
15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.3897+00:00] data : [map[op:replace path:/round value:%!s(float64=578)]] 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.3897+00:00] policy name : vehicle 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.3898+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.3898+00:00] dirParts : [ node vehicle] 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.3899+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.3899+00:00] root: /node/vehicle 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.3900+00:00] path : round 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.3900+00:00] calling ParsePatchPathEscaped to check the path 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.3901+00:00] No path conflicts detected 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.3901+00:00] Updated the data in the corresponding path successfully 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.3970+00:00] PDP received a request to get data through API 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.3970+00:00] datapath to get Data : /node/vehicle 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.3970+00:00] Json Data at /node/vehicle: {"round":578,"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.4064+00:00] PDP received a request to update data through API 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4067+00:00] All fields are valid! 
15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.4067+00:00] data : [map[op:remove path:/round]] 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.4067+00:00] policy name : vehicle 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4067+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0] map[data:[node.vehicle] policy:[vehicle] policy-id:vehicle policy-version:1.0.6]] 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4067+00:00] dirParts : [ node vehicle] 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.4068+00:00] Matched policy: &{Data:[node.vehicle] Policy:[vehicle] PolicyID:vehicle PolicyVersion:1.0.6} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4068+00:00] root: /node/vehicle 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4068+00:00] path : round 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.4068+00:00] calling ParsePatchPathEscaped to check the path 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4068+00:00] No path conflicts detected 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.4068+00:00] Updated the data in the corresponding path successfully 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.4133+00:00] PDP received a request to get data through API 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4134+00:00] datapath to get Data : /node/vehicle 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4134+00:00] Json Data at /node/vehicle: {"vehicles":[{"owner":"user1","status":"available","type":"car","vehicle_id":"v1"},{"owner":"user2","status":"in use","type":"bike","vehicle_id":"v2"}]} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4226+00:00] PDP received a decision request. 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4227+00:00] Headers processed for requestId: Unknown 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4229+00:00] Validation successful for request fields 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4229+00:00] SDK making a decision 15:03:05 policy-opa-pdp | {"decision_id":"1775bef2-636b-4149-bf12-3ff6eb34bf8d","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"ee301e8e-ddf2-4a72-b955-5d597668a9ee","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":410,"timer_rego_query_compile_ns":80222,"timer_rego_query_eval_ns":234008,"timer_rego_query_parse_ns":76923,"timer_sdk_decision_eval_ns":488626},"msg":"Decision Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-13T15:01:35Z","timestamp":"2025-06-13T15:01:35.423091729Z","type":"openpolicyagent.org/decision_logs"} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4238+00:00] RAW opa Decision output: 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "ID": "1775bef2-636b-4149-bf12-3ff6eb34bf8d", 15:03:05 policy-opa-pdp | "Result": { 15:03:05 policy-opa-pdp | "action_is_granted": true, 15:03:05 policy-opa-pdp | "allow": true, 15:03:05 policy-opa-pdp | "user_has_vehicle_access": [ 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "status": "available", 15:03:05 policy-opa-pdp | "type": "car" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | ] 15:03:05 policy-opa-pdp | }, 15:03:05 policy-opa-pdp | "Provenance": { 15:03:05 policy-opa-pdp | "version": "1.1.0", 15:03:05 policy-opa-pdp | "build_commit": "", 15:03:05 policy-opa-pdp | "build_timestamp": "", 15:03:05 
policy-opa-pdp | "build_hostname": "" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4312+00:00] PDP received a decision request. 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4313+00:00] Headers processed for requestId: Unknown 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4316+00:00] Validation successful for request fields 15:03:05 policy-opa-pdp | WARN[2025-06-13T15:01:35.4316+00:00] Policy Name vehile does not exist 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4379+00:00] PDP received a decision request. 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4380+00:00] Headers processed for requestId: Unknown 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4383+00:00] Validation successful for request fields 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4384+00:00] SDK making a decision 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.4393+00:00] RAW opa Decision output: 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "ID": "98020f25-8d51-438f-a1e7-2b3c78c8e712", 15:03:05 policy-opa-pdp | "Result": { 15:03:05 policy-opa-pdp | "action_is_granted": true, 15:03:05 policy-opa-pdp | "allow": true, 15:03:05 policy-opa-pdp | "user_has_vehicle_access": [ 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "status": "available", 15:03:05 policy-opa-pdp | "type": "car" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | ] 15:03:05 policy-opa-pdp | }, 15:03:05 policy-opa-pdp | "Provenance": { 15:03:05 policy-opa-pdp | "version": "1.1.0", 15:03:05 policy-opa-pdp | "build_commit": "", 15:03:05 policy-opa-pdp | "build_timestamp": "", 15:03:05 policy-opa-pdp | "build_hostname": "" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | {"decision_id":"98020f25-8d51-438f-a1e7-2b3c78c8e712","input":{"actions":["use"],"attributes":["type","status"],"user":"user1","vehicle_id":"v1"},"labels":{"id":"ee301e8e-ddf2-4a72-b955-5d597668a9ee","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":880,"timer_rego_query_eval_ns":534407,"timer_sdk_decision_eval_ns":641430},"msg":"Decision Log","nd_builtin_cache":null,"path":"vehicle","result":{"action_is_granted":true,"allow":true,"user_has_vehicle_access":[{"status":"available","type":"car"}]},"time":"2025-06-13T15:01:35Z","timestamp":"2025-06-13T15:01:35.438516415Z","type":"openpolicyagent.org/decision_logs"} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.7115+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"045d2699-9d86-4a83-9873-ac3266bc9f6a","timestampMs":1749826895691,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.7122+00:00] messageType: PDP_UPDATE 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.7124+00:00] PDP_UPDATE Message received: {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"045d2699-9d86-4a83-9873-ac3266bc9f6a","timestampMs":1749826895691,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.7124+00:00] Found Policies to be undeployed 15:03:05 policy-opa-pdp | 
INFO[2025-06-13T15:01:35.7127+00:00] Extracted Policy Name: vehicle, Version: 1.0.6 for undeployment 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.7129+00:00] Deleting Policy from OPA : /vehicle 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.7156+00:00] Removing policy directory: /opt/policies/vehicle 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.7159+00:00] Deleting data from OPA : /node/vehicle 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.7159+00:00] Analyzing dataPath: /node/vehicle 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.7160+00:00] Path segments: [ node vehicle] 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.7160+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/vehicle 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.7160+00:00] Removing data directory: /opt/data/node/vehicle 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.7163+00:00] PoliciesDeployed Map: { 15:03:05 policy-opa-pdp | "deployed_policies_dict": [ 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "data": [ 15:03:05 policy-opa-pdp | "node.slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy": [ 15:03:05 policy-opa-pdp | "slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:03:05 policy-opa-pdp | "policy-version": "1.0.0" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | ] 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.7163+00:00] Policies Map After Undeployment : { 15:03:05 policy-opa-pdp | "deployed_policies_dict": [ 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "data": [ 15:03:05 policy-opa-pdp | "node.slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy": [ 15:03:05 policy-opa-pdp | "slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:03:05 policy-opa-pdp | "policy-version": "1.0.0" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | ] 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.7166+00:00] Processed policies_to_be_undeployed successfully 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.7168+00:00] Sending PDP Status With Update Response 15:03:05 policy-opa-pdp | 2025/06/13 15:01:35 KafkaProducer or producer produce message 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.7171+00:00] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"045d2699-9d86-4a83-9873-ac3266bc9f6a","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"e7fb1f7d-5271-4e89-b9da-fb8f6bac28f0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826895716","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:35.7171+00:00] PDP_STATUS Message Sent Successfully 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.7172+00:00] 0 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.7241+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp 
Update","response":{"responseTo":"045d2699-9d86-4a83-9873-ac3266bc9f6a","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"e7fb1f7d-5271-4e89-b9da-fb8f6bac28f0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826895716","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.7243+00:00] messageType: PDP_STATUS 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:35.7243+00:00] discarding event of type PDP_STATUS 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:36.1265+00:00] PDP received a request to get data through API 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:36.1266+00:00] datapath to get Data : /node/vehicle 15:03:05 policy-opa-pdp | WARN[2025-06-13T15:01:36.1266+00:00] Error in reading data under /node/vehicle path 15:03:05 policy-opa-pdp | ERRO[2025-06-13T15:01:36.1266+00:00] Error in getting data - storage_not_found_error: /node/vehicle: document does not exist 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:36.1378+00:00] PDP received a request to update data through API 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:36.1381+00:00] All fields are valid! 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:36.1382+00:00] data : [map[op:remove path:/round]] 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:36.1382+00:00] policy name : vehicle 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:36.1382+00:00] deployedPolicies [map[data:[node.slice.capacity.check] policy:[slice.capacity.check] policy-id:slice.capacity.check policy-version:1.0.0]] 15:03:05 policy-opa-pdp | ERRO[2025-06-13T15:01:36.1382+00:00] Policy associated with the patch request does not exists 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:36.8480+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | 
{"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ebd23f04-0f67-4ece-9cfd-c851a71b8632","timestampMs":1749826896827,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:36.8482+00:00] messageType: PDP_UPDATE 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:36.8485+00:00] PDP_UPDATE Message received: {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wi
LAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ebd23f04-0f67-4ece-9cfd-c851a71b8632","timestampMs":1749826896827,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:36.8485+00:00] Check if Policy is Already Deployed: { 15:03:05 policy-opa-pdp | "deployed_policies_dict": [ 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "data": [ 15:03:05 policy-opa-pdp | "node.slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy": [ 15:03:05 policy-opa-pdp | "slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:03:05 policy-opa-pdp | "policy-version": "1.0.0" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | ] 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:36.8486+00:00] Policy is new and should be deployed: abac 1.0.7 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:36.8486+00:00] Policy Is Allowed: abac 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:36.8486+00:00] Validating properties data for policy: abac 15:03:05 
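For readability: the properties.policy value in the PDP_UPDATE above is base64-encoded Rego (recoverable with, e.g., Python's base64.b64decode), and properties.data encodes the sensor_data JSON that reappears below when /node/abac is read back. The policy payload decodes to:

    package abac

    import rego.v1

    default allow := false

    allow if {
     viewable_sensor_data
     action_is_read
    }

    action_is_read if "read" in input.actions

    viewable_sensor_data contains view_data if {
     some sensor_data in data.node.abac.sensor_data
     sensor_data.timestamp >= input.time_period.from
     sensor_data.timestamp < input.time_period.to

     view_data := {datatype: sensor_data[datatype] | datatype in input.datatypes}
    }

This accounts for the decision results later in the log: a request with time_period from 2024-02-27 to 2024-02-29 keeps only the readings stamped 2024-02-27 and 2024-02-28 (Galle, Jaffna, Trincomalee, Nuwara Eliya), because the comparison is inclusive at "from" and exclusive at "to", and the comprehension projects just the requested datatypes (location, temperature, precipitation, windspeed).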
policy-opa-pdp | DEBU[2025-06-13T15:01:36.8486+00:00] Validating properties policy for policy: abac 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:36.8486+00:00] Validation successful for policy: abac 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:36.8488+00:00] Directory created: /opt/policies/abac 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:36.8489+00:00] Policy file saved: /opt/policies/abac/policy.rego 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:36.8490+00:00] Directory created: /opt/data/node/abac 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:36.8491+00:00] Data file saved: /opt/data/node/abac/data.json 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:36.8491+00:00] Before calling combinedoutput 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:36.8687+00:00] Bundle Built Successfully.... 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:36.8742+00:00] storage not found creating : /node/abac 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:36.8743+00:00] PoliciesDeployed Map: { 15:03:05 policy-opa-pdp | "deployed_policies_dict": [ 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "data": [ 15:03:05 policy-opa-pdp | "node.slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy": [ 15:03:05 policy-opa-pdp | "slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:03:05 policy-opa-pdp | "policy-version": "1.0.0" 15:03:05 policy-opa-pdp | }, 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "data": [ 15:03:05 policy-opa-pdp | "node.abac" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy": [ 15:03:05 policy-opa-pdp | "abac" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy-id": "abac", 15:03:05 policy-opa-pdp | "policy-version": "1.0.7" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | ] 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:36.8744+00:00] Loaded Policy: abac 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:36.8744+00:00] Processed policies_to_be_deployed successfully 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:36.8744+00:00] Sending PDP Status With Update Response 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:36.8745+00:00] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ebd23f04-0f67-4ece-9cfd-c851a71b8632","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"3b38f5fd-a98a-4a06-88dc-dc820557aa5c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826896874","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:01:36.8745+00:00] PDP_STATUS Message Sent Successfully 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:36.8745+00:00] 0 15:03:05 policy-opa-pdp | 2025/06/13 15:01:36 KafkaProducer or producer produce message 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:36.8815+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ebd23f04-0f67-4ece-9cfd-c851a71b8632","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for 
all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"3b38f5fd-a98a-4a06-88dc-dc820557aa5c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826896874","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:36.8816+00:00] messageType: PDP_STATUS 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:36.8816+00:00] discarding event of type PDP_STATUS 15:03:05 policy-opa-pdp | 2025/06/13 15:01:44 KafkaProducer or producer produce message 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:44.4893+00:00] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"656f0a92-2e3a-4d00-87c1-0613f8933b09","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826904489","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:44.4894+00:00] Sending Heartbeat ... 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:44.4972+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"656f0a92-2e3a-4d00-87c1-0613f8933b09","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826904489","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:44.4973+00:00] messageType: PDP_STATUS 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:01:44.4973+00:00] discarding event of type PDP_STATUS 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:02:00.9094+00:00] PDP received a request to get data through API 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:00.9095+00:00] datapath to get Data : /node/abac 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:00.9096+00:00] Json Data at /node/abac: {"sensor_data":[{"humidity":"40%","id":"0001","location":"Sri Lanka","particle_density":"1.3 g/l","precipitation":"1000 mm","temperature":"28 C","timestamp":"2024-02-26","windspeed":"5.5 m/s"},{"humidity":"45%","id":"0002","location":"Colombo","particle_density":"1.5 g/l","precipitation":"1200 mm","temperature":"30 C","timestamp":"2024-02-26","windspeed":"6.0 m/s"},{"humidity":"60%","id":"0003","location":"Kandy","particle_density":"1.1 g/l","precipitation":"800 mm","temperature":"25 C","timestamp":"2024-02-26","windspeed":"4.5 m/s"},{"humidity":"30%","id":"0004","location":"Galle","particle_density":"1.8 g/l","precipitation":"500 mm","temperature":"35 C","timestamp":"2024-02-27","windspeed":"7.2 m/s"},{"humidity":"20%","id":"0005","location":"Jaffna","particle_density":"0.9 g/l","precipitation":"300 mm","temperature":"-5 C","timestamp":"2024-02-27","windspeed":"3.8 m/s"},{"humidity":"55%","id":"0006","location":"Trincomalee","particle_density":"1.2 g/l","precipitation":"1000 mm","temperature":"20 C","timestamp":"2024-02-28","windspeed":"5.0 m/s"},{"humidity":"50%","id":"0007","location":"Nuwara Eliya","particle_density":"1.3 g/l","precipitation":"600 mm","temperature":"25 C","timestamp":"2024-02-28","windspeed":"4.0 
m/s"},{"humidity":"40%","id":"0008","location":"Anuradhapura","particle_density":"1.4 g/l","precipitation":"700 mm","temperature":"28 C","timestamp":"2024-02-29","windspeed":"5.8 m/s"},{"humidity":"65%","id":"0009","location":"Matara","particle_density":"1.6 g/l","precipitation":"900 mm","temperature":"32 C","timestamp":"2024-02-29","windspeed":"6.5 m/s"}]} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:00.9204+00:00] PDP received a decision request. 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:00.9205+00:00] Headers processed for requestId: Unknown 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:00.9214+00:00] Validation successful for request fields 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:00.9214+00:00] SDK making a decision 15:03:05 policy-opa-pdp | {"decision_id":"6edd75ad-b4c3-4f5e-98d1-51dff0a16e75","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"ee301e8e-ddf2-4a72-b955-5d597668a9ee","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":431,"timer_rego_query_compile_ns":81943,"timer_rego_query_eval_ns":516867,"timer_rego_query_parse_ns":61212,"timer_sdk_decision_eval_ns":817056},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-13T15:02:00Z","timestamp":"2025-06-13T15:02:00.921553228Z","type":"openpolicyagent.org/decision_logs"} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:00.9226+00:00] RAW opa Decision output: 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "ID": "6edd75ad-b4c3-4f5e-98d1-51dff0a16e75", 15:03:05 policy-opa-pdp | "Result": { 15:03:05 policy-opa-pdp | "action_is_read": true, 15:03:05 policy-opa-pdp | "allow": true, 15:03:05 policy-opa-pdp | "viewable_sensor_data": [ 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "location": "Galle", 15:03:05 policy-opa-pdp | "precipitation": "500 mm", 15:03:05 policy-opa-pdp | "temperature": "35 C", 15:03:05 policy-opa-pdp | "windspeed": "7.2 m/s" 15:03:05 policy-opa-pdp | }, 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "location": "Jaffna", 15:03:05 policy-opa-pdp | "precipitation": "300 mm", 15:03:05 policy-opa-pdp | "temperature": "-5 C", 15:03:05 policy-opa-pdp | "windspeed": "3.8 m/s" 15:03:05 policy-opa-pdp | }, 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "location": "Nuwara Eliya", 15:03:05 policy-opa-pdp | "precipitation": "600 mm", 15:03:05 policy-opa-pdp | "temperature": "25 C", 15:03:05 policy-opa-pdp | "windspeed": "4.0 m/s" 15:03:05 policy-opa-pdp | }, 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "location": "Trincomalee", 15:03:05 policy-opa-pdp | "precipitation": "1000 mm", 15:03:05 policy-opa-pdp | "temperature": "20 C", 15:03:05 policy-opa-pdp | "windspeed": "5.0 m/s" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | ] 15:03:05 policy-opa-pdp | }, 15:03:05 policy-opa-pdp | "Provenance": { 15:03:05 policy-opa-pdp | "version": "1.1.0", 15:03:05 policy-opa-pdp | "build_commit": "", 15:03:05 policy-opa-pdp | "build_timestamp": "", 15:03:05 policy-opa-pdp | 
"build_hostname": "" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:00.9302+00:00] PDP received a decision request. 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:00.9303+00:00] Headers processed for requestId: Unknown 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:00.9305+00:00] Validation successful for request fields 15:03:05 policy-opa-pdp | WARN[2025-06-13T15:02:00.9306+00:00] Policy Name abc does not exist 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:00.9393+00:00] PDP received a decision request. 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:00.9394+00:00] Headers processed for requestId: Unknown 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:00.9396+00:00] Validation successful for request fields 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:00.9399+00:00] SDK making a decision 15:03:05 policy-opa-pdp | {"decision_id":"67b41d01-cc4b-4674-be13-37db0dded305","input":{"actions":["read"],"datatypes":["location","temperature","precipitation","windspeed"],"time_period":{"from":"2024-02-27","to":"2024-02-29"}},"labels":{"id":"ee301e8e-ddf2-4a72-b955-5d597668a9ee","version":"1.1.0"},"level":"info","metrics":{"timer_rego_external_resolve_ns":560,"timer_rego_query_eval_ns":675032,"timer_sdk_decision_eval_ns":862048},"msg":"Decision Log","nd_builtin_cache":null,"path":"abac","result":{"action_is_read":true,"allow":true,"viewable_sensor_data":[{"location":"Galle","precipitation":"500 mm","temperature":"35 C","windspeed":"7.2 m/s"},{"location":"Jaffna","precipitation":"300 mm","temperature":"-5 C","windspeed":"3.8 m/s"},{"location":"Nuwara Eliya","precipitation":"600 mm","temperature":"25 C","windspeed":"4.0 m/s"},{"location":"Trincomalee","precipitation":"1000 mm","temperature":"20 C","windspeed":"5.0 m/s"}]},"time":"2025-06-13T15:02:00Z","timestamp":"2025-06-13T15:02:00.940027827Z","type":"openpolicyagent.org/decision_logs"} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:00.9410+00:00] RAW opa Decision output: 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "ID": "67b41d01-cc4b-4674-be13-37db0dded305", 15:03:05 policy-opa-pdp | "Result": { 15:03:05 policy-opa-pdp | "action_is_read": true, 15:03:05 policy-opa-pdp | "allow": true, 15:03:05 policy-opa-pdp | "viewable_sensor_data": [ 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "location": "Galle", 15:03:05 policy-opa-pdp | "precipitation": "500 mm", 15:03:05 policy-opa-pdp | "temperature": "35 C", 15:03:05 policy-opa-pdp | "windspeed": "7.2 m/s" 15:03:05 policy-opa-pdp | }, 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "location": "Jaffna", 15:03:05 policy-opa-pdp | "precipitation": "300 mm", 15:03:05 policy-opa-pdp | "temperature": "-5 C", 15:03:05 policy-opa-pdp | "windspeed": "3.8 m/s" 15:03:05 policy-opa-pdp | }, 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "location": "Nuwara Eliya", 15:03:05 policy-opa-pdp | "precipitation": "600 mm", 15:03:05 policy-opa-pdp | "temperature": "25 C", 15:03:05 policy-opa-pdp | "windspeed": "4.0 m/s" 15:03:05 policy-opa-pdp | }, 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "location": "Trincomalee", 15:03:05 policy-opa-pdp | "precipitation": "1000 mm", 15:03:05 policy-opa-pdp | "temperature": "20 C", 15:03:05 policy-opa-pdp | "windspeed": "5.0 m/s" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | ] 15:03:05 policy-opa-pdp | }, 15:03:05 policy-opa-pdp | "Provenance": { 15:03:05 policy-opa-pdp | "version": "1.1.0", 15:03:05 policy-opa-pdp | "build_commit": "", 15:03:05 policy-opa-pdp | 
"build_timestamp": "", 15:03:05 policy-opa-pdp | "build_hostname": "" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:01.5355+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"ed3139d6-f3ac-4406-bc91-e8a08d677771","timestampMs":1749826921507,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:01.5356+00:00] messageType: PDP_UPDATE 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:01.5357+00:00] PDP_UPDATE Message received: {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"ed3139d6-f3ac-4406-bc91-e8a08d677771","timestampMs":1749826921507,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:02:01.5358+00:00] Found Policies to be undeployed 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:02:01.5358+00:00] Extracted Policy Name: abac, Version: 1.0.7 for undeployment 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:01.5358+00:00] Deleting Policy from OPA : /abac 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:01.5385+00:00] Removing policy directory: /opt/policies/abac 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:01.5388+00:00] Deleting data from OPA : /node/abac 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:01.5388+00:00] Analyzing dataPath: /node/abac 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:01.5389+00:00] Path segments: [ node abac] 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:01.5389+00:00] Path doesn't have any parent-child hierarchy;so returning the original path: /node/abac 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:01.5389+00:00] Removing data directory: /opt/data/node/abac 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:02:01.5391+00:00] PoliciesDeployed Map: { 15:03:05 policy-opa-pdp | "deployed_policies_dict": [ 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "data": [ 15:03:05 policy-opa-pdp | "node.slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy": [ 15:03:05 policy-opa-pdp | "slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:03:05 policy-opa-pdp | "policy-version": "1.0.0" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | ] 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:01.5392+00:00] Policies Map After Undeployment : { 15:03:05 policy-opa-pdp | "deployed_policies_dict": [ 15:03:05 policy-opa-pdp | { 15:03:05 policy-opa-pdp | "data": [ 15:03:05 policy-opa-pdp | "node.slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy": [ 15:03:05 policy-opa-pdp | "slice.capacity.check" 15:03:05 policy-opa-pdp | ], 15:03:05 policy-opa-pdp | "policy-id": "slice.capacity.check", 15:03:05 policy-opa-pdp | "policy-version": "1.0.0" 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | ] 15:03:05 policy-opa-pdp | } 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:02:01.5392+00:00] Processed policies_to_be_undeployed successfully 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:02:01.5392+00:00] Sending PDP Status With Update Response 15:03:05 policy-opa-pdp | 2025/06/13 15:02:01 
KafkaProducer or producer produce message 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:01.5397+00:00] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ed3139d6-f3ac-4406-bc91-e8a08d677771","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"daf5328f-bc9a-4695-acd7-84c1240b5f8d","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826921539","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | INFO[2025-06-13T15:02:01.5397+00:00] PDP_STATUS Message Sent Successfully 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:01.5397+00:00] 0 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:01.5480+00:00] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-opa-pdp | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ed3139d6-f3ac-4406-bc91-e8a08d677771","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"daf5328f-bc9a-4695-acd7-84c1240b5f8d","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826921539","deploymentInstanceInfo":""} 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:01.5483+00:00] messageType: PDP_STATUS 15:03:05 policy-opa-pdp | DEBU[2025-06-13T15:02:01.5483+00:00] discarding event of type PDP_STATUS 15:03:05 policy-pap | Waiting for api port 6969... 15:03:05 policy-pap | api (172.17.0.8:6969) open 15:03:05 policy-pap | Waiting for kafka port 9092... 15:03:05 policy-pap | kafka (172.17.0.7:9092) open 15:03:05 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 15:03:05 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 15:03:05 policy-pap | 15:03:05 policy-pap | . ____ _ __ _ _ 15:03:05 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 15:03:05 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 15:03:05 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 15:03:05 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 15:03:05 policy-pap | =========|_|==============|___/=/_/_/_/ 15:03:05 policy-pap | 15:03:05 policy-pap | :: Spring Boot :: (v3.4.6) 15:03:05 policy-pap | 15:03:05 policy-pap | [2025-06-13T14:57:38.331+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.15 with PID 97 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 15:03:05 policy-pap | [2025-06-13T14:57:38.333+00:00|INFO|PolicyPapApplication|main] The following 1 profile is active: "default" 15:03:05 policy-pap | [2025-06-13T14:57:39.931+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 15:03:05 policy-pap | [2025-06-13T14:57:40.042+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 97 ms. Found 7 JPA repository interfaces. 
15:03:05 policy-pap | [2025-06-13T14:57:41.209+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port 6969 (http) 15:03:05 policy-pap | [2025-06-13T14:57:41.224+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 15:03:05 policy-pap | [2025-06-13T14:57:41.226+00:00|INFO|StandardService|main] Starting service [Tomcat] 15:03:05 policy-pap | [2025-06-13T14:57:41.227+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.41] 15:03:05 policy-pap | [2025-06-13T14:57:41.292+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 15:03:05 policy-pap | [2025-06-13T14:57:41.292+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2888 ms 15:03:05 policy-pap | [2025-06-13T14:57:41.798+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 15:03:05 policy-pap | [2025-06-13T14:57:41.883+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.6.16.Final 15:03:05 policy-pap | [2025-06-13T14:57:41.929+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 15:03:05 policy-pap | [2025-06-13T14:57:42.398+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 15:03:05 policy-pap | [2025-06-13T14:57:42.447+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 15:03:05 policy-pap | [2025-06-13T14:57:42.676+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.postgresql.jdbc.PgConnection@53a16dd6 15:03:05 policy-pap | [2025-06-13T14:57:42.678+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 15:03:05 policy-pap | [2025-06-13T14:57:42.780+00:00|INFO|pooling|main] HHH10001005: Database info: 15:03:05 policy-pap | Database JDBC URL [Connecting through datasource 'HikariDataSource (HikariPool-1)'] 15:03:05 policy-pap | Database driver: undefined/unknown 15:03:05 policy-pap | Database version: 16.4 15:03:05 policy-pap | Autocommit mode: undefined/unknown 15:03:05 policy-pap | Isolation level: undefined/unknown 15:03:05 policy-pap | Minimum pool size: undefined/unknown 15:03:05 policy-pap | Maximum pool size: undefined/unknown 15:03:05 policy-pap | [2025-06-13T14:57:44.867+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 15:03:05 policy-pap | [2025-06-13T14:57:44.871+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 15:03:05 policy-pap | [2025-06-13T14:57:46.103+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 15:03:05 policy-pap | allow.auto.create.topics = true 15:03:05 policy-pap | auto.commit.interval.ms = 5000 15:03:05 policy-pap | auto.include.jmx.reporter = true 15:03:05 policy-pap | auto.offset.reset = latest 15:03:05 policy-pap | bootstrap.servers = [kafka:9092] 15:03:05 policy-pap | check.crcs = true 15:03:05 policy-pap | client.dns.lookup = use_all_dns_ips 15:03:05 policy-pap | client.id = consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-1 15:03:05 policy-pap | client.rack = 15:03:05 policy-pap | connections.max.idle.ms = 540000 15:03:05 policy-pap | default.api.timeout.ms = 60000 15:03:05 policy-pap | enable.auto.commit = true 15:03:05 policy-pap | enable.metrics.push = true 15:03:05 policy-pap | exclude.internal.topics = true 15:03:05 policy-pap | fetch.max.bytes = 52428800 15:03:05 policy-pap | fetch.max.wait.ms = 500 15:03:05 policy-pap | 
fetch.min.bytes = 1 15:03:05 policy-pap | group.id = fddb7771-28a4-4343-b8d7-b4045b0e6dfb 15:03:05 policy-pap | group.instance.id = null 15:03:05 policy-pap | group.protocol = classic 15:03:05 policy-pap | group.remote.assignor = null 15:03:05 policy-pap | heartbeat.interval.ms = 3000 15:03:05 policy-pap | interceptor.classes = [] 15:03:05 policy-pap | internal.leave.group.on.close = true 15:03:05 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 15:03:05 policy-pap | isolation.level = read_uncommitted 15:03:05 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 15:03:05 policy-pap | max.partition.fetch.bytes = 1048576 15:03:05 policy-pap | max.poll.interval.ms = 300000 15:03:05 policy-pap | max.poll.records = 500 15:03:05 policy-pap | metadata.max.age.ms = 300000 15:03:05 policy-pap | metadata.recovery.strategy = none 15:03:05 policy-pap | metric.reporters = [] 15:03:05 policy-pap | metrics.num.samples = 2 15:03:05 policy-pap | metrics.recording.level = INFO 15:03:05 policy-pap | metrics.sample.window.ms = 30000 15:03:05 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 15:03:05 policy-pap | receive.buffer.bytes = 65536 15:03:05 policy-pap | reconnect.backoff.max.ms = 1000 15:03:05 policy-pap | reconnect.backoff.ms = 50 15:03:05 policy-pap | request.timeout.ms = 30000 15:03:05 policy-pap | retry.backoff.max.ms = 1000 15:03:05 policy-pap | retry.backoff.ms = 100 15:03:05 policy-pap | sasl.client.callback.handler.class = null 15:03:05 policy-pap | sasl.jaas.config = null 15:03:05 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 15:03:05 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 15:03:05 policy-pap | sasl.kerberos.service.name = null 15:03:05 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 15:03:05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 15:03:05 policy-pap | sasl.login.callback.handler.class = null 15:03:05 policy-pap | sasl.login.class = null 15:03:05 policy-pap | sasl.login.connect.timeout.ms = null 15:03:05 policy-pap | sasl.login.read.timeout.ms = null 15:03:05 policy-pap | sasl.login.refresh.buffer.seconds = 300 15:03:05 policy-pap | sasl.login.refresh.min.period.seconds = 60 15:03:05 policy-pap | sasl.login.refresh.window.factor = 0.8 15:03:05 policy-pap | sasl.login.refresh.window.jitter = 0.05 15:03:05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 15:03:05 policy-pap | sasl.login.retry.backoff.ms = 100 15:03:05 policy-pap | sasl.mechanism = GSSAPI 15:03:05 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 15:03:05 policy-pap | sasl.oauthbearer.expected.audience = null 15:03:05 policy-pap | sasl.oauthbearer.expected.issuer = null 15:03:05 policy-pap | sasl.oauthbearer.header.urlencode = false 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 15:03:05 policy-pap | sasl.oauthbearer.scope.claim.name = scope 15:03:05 policy-pap | sasl.oauthbearer.sub.claim.name = sub 15:03:05 policy-pap | sasl.oauthbearer.token.endpoint.url = null 15:03:05 policy-pap | security.protocol = PLAINTEXT 15:03:05 policy-pap | security.providers = null 15:03:05 policy-pap | send.buffer.bytes = 131072 15:03:05 policy-pap | 
session.timeout.ms = 45000 15:03:05 policy-pap | socket.connection.setup.timeout.max.ms = 30000 15:03:05 policy-pap | socket.connection.setup.timeout.ms = 10000 15:03:05 policy-pap | ssl.cipher.suites = null 15:03:05 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 15:03:05 policy-pap | ssl.endpoint.identification.algorithm = https 15:03:05 policy-pap | ssl.engine.factory.class = null 15:03:05 policy-pap | ssl.key.password = null 15:03:05 policy-pap | ssl.keymanager.algorithm = SunX509 15:03:05 policy-pap | ssl.keystore.certificate.chain = null 15:03:05 policy-pap | ssl.keystore.key = null 15:03:05 policy-pap | ssl.keystore.location = null 15:03:05 policy-pap | ssl.keystore.password = null 15:03:05 policy-pap | ssl.keystore.type = JKS 15:03:05 policy-pap | ssl.protocol = TLSv1.3 15:03:05 policy-pap | ssl.provider = null 15:03:05 policy-pap | ssl.secure.random.implementation = null 15:03:05 policy-pap | ssl.trustmanager.algorithm = PKIX 15:03:05 policy-pap | ssl.truststore.certificates = null 15:03:05 policy-pap | ssl.truststore.location = null 15:03:05 policy-pap | ssl.truststore.password = null 15:03:05 policy-pap | ssl.truststore.type = JKS 15:03:05 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 15:03:05 policy-pap | 15:03:05 policy-pap | [2025-06-13T14:57:46.159+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 15:03:05 policy-pap | [2025-06-13T14:57:46.313+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 15:03:05 policy-pap | [2025-06-13T14:57:46.314+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 15:03:05 policy-pap | [2025-06-13T14:57:46.314+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826666312 15:03:05 policy-pap | [2025-06-13T14:57:46.316+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-1, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] Subscribed to topic(s): policy-pdp-pap 15:03:05 policy-pap | [2025-06-13T14:57:46.316+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 15:03:05 policy-pap | allow.auto.create.topics = true 15:03:05 policy-pap | auto.commit.interval.ms = 5000 15:03:05 policy-pap | auto.include.jmx.reporter = true 15:03:05 policy-pap | auto.offset.reset = latest 15:03:05 policy-pap | bootstrap.servers = [kafka:9092] 15:03:05 policy-pap | check.crcs = true 15:03:05 policy-pap | client.dns.lookup = use_all_dns_ips 15:03:05 policy-pap | client.id = consumer-policy-pap-2 15:03:05 policy-pap | client.rack = 15:03:05 policy-pap | connections.max.idle.ms = 540000 15:03:05 policy-pap | default.api.timeout.ms = 60000 15:03:05 policy-pap | enable.auto.commit = true 15:03:05 policy-pap | enable.metrics.push = true 15:03:05 policy-pap | exclude.internal.topics = true 15:03:05 policy-pap | fetch.max.bytes = 52428800 15:03:05 policy-pap | fetch.max.wait.ms = 500 15:03:05 policy-pap | fetch.min.bytes = 1 15:03:05 policy-pap | group.id = policy-pap 15:03:05 policy-pap | group.instance.id = null 15:03:05 policy-pap | group.protocol = classic 15:03:05 policy-pap | group.remote.assignor = null 15:03:05 policy-pap | heartbeat.interval.ms = 3000 15:03:05 policy-pap | interceptor.classes = [] 15:03:05 policy-pap | internal.leave.group.on.close = true 15:03:05 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 15:03:05 policy-pap | isolation.level = read_uncommitted 15:03:05 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 15:03:05 policy-pap | 
max.partition.fetch.bytes = 1048576 15:03:05 policy-pap | max.poll.interval.ms = 300000 15:03:05 policy-pap | max.poll.records = 500 15:03:05 policy-pap | metadata.max.age.ms = 300000 15:03:05 policy-pap | metadata.recovery.strategy = none 15:03:05 policy-pap | metric.reporters = [] 15:03:05 policy-pap | metrics.num.samples = 2 15:03:05 policy-pap | metrics.recording.level = INFO 15:03:05 policy-pap | metrics.sample.window.ms = 30000 15:03:05 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 15:03:05 policy-pap | receive.buffer.bytes = 65536 15:03:05 policy-pap | reconnect.backoff.max.ms = 1000 15:03:05 policy-pap | reconnect.backoff.ms = 50 15:03:05 policy-pap | request.timeout.ms = 30000 15:03:05 policy-pap | retry.backoff.max.ms = 1000 15:03:05 policy-pap | retry.backoff.ms = 100 15:03:05 policy-pap | sasl.client.callback.handler.class = null 15:03:05 policy-pap | sasl.jaas.config = null 15:03:05 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 15:03:05 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 15:03:05 policy-pap | sasl.kerberos.service.name = null 15:03:05 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 15:03:05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 15:03:05 policy-pap | sasl.login.callback.handler.class = null 15:03:05 policy-pap | sasl.login.class = null 15:03:05 policy-pap | sasl.login.connect.timeout.ms = null 15:03:05 policy-pap | sasl.login.read.timeout.ms = null 15:03:05 policy-pap | sasl.login.refresh.buffer.seconds = 300 15:03:05 policy-pap | sasl.login.refresh.min.period.seconds = 60 15:03:05 policy-pap | sasl.login.refresh.window.factor = 0.8 15:03:05 policy-pap | sasl.login.refresh.window.jitter = 0.05 15:03:05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 15:03:05 policy-pap | sasl.login.retry.backoff.ms = 100 15:03:05 policy-pap | sasl.mechanism = GSSAPI 15:03:05 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 15:03:05 policy-pap | sasl.oauthbearer.expected.audience = null 15:03:05 policy-pap | sasl.oauthbearer.expected.issuer = null 15:03:05 policy-pap | sasl.oauthbearer.header.urlencode = false 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 15:03:05 policy-pap | sasl.oauthbearer.scope.claim.name = scope 15:03:05 policy-pap | sasl.oauthbearer.sub.claim.name = sub 15:03:05 policy-pap | sasl.oauthbearer.token.endpoint.url = null 15:03:05 policy-pap | security.protocol = PLAINTEXT 15:03:05 policy-pap | security.providers = null 15:03:05 policy-pap | send.buffer.bytes = 131072 15:03:05 policy-pap | session.timeout.ms = 45000 15:03:05 policy-pap | socket.connection.setup.timeout.max.ms = 30000 15:03:05 policy-pap | socket.connection.setup.timeout.ms = 10000 15:03:05 policy-pap | ssl.cipher.suites = null 15:03:05 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 15:03:05 policy-pap | ssl.endpoint.identification.algorithm = https 15:03:05 policy-pap | ssl.engine.factory.class = null 15:03:05 policy-pap | ssl.key.password = null 15:03:05 policy-pap | ssl.keymanager.algorithm = SunX509 15:03:05 policy-pap | ssl.keystore.certificate.chain = null 15:03:05 policy-pap | ssl.keystore.key = null 15:03:05 policy-pap | ssl.keystore.location = null 15:03:05 
policy-pap | ssl.keystore.password = null 15:03:05 policy-pap | ssl.keystore.type = JKS 15:03:05 policy-pap | ssl.protocol = TLSv1.3 15:03:05 policy-pap | ssl.provider = null 15:03:05 policy-pap | ssl.secure.random.implementation = null 15:03:05 policy-pap | ssl.trustmanager.algorithm = PKIX 15:03:05 policy-pap | ssl.truststore.certificates = null 15:03:05 policy-pap | ssl.truststore.location = null 15:03:05 policy-pap | ssl.truststore.password = null 15:03:05 policy-pap | ssl.truststore.type = JKS 15:03:05 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 15:03:05 policy-pap | 15:03:05 policy-pap | [2025-06-13T14:57:46.317+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 15:03:05 policy-pap | [2025-06-13T14:57:46.324+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 15:03:05 policy-pap | [2025-06-13T14:57:46.324+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 15:03:05 policy-pap | [2025-06-13T14:57:46.324+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826666324 15:03:05 policy-pap | [2025-06-13T14:57:46.324+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 15:03:05 policy-pap | [2025-06-13T14:57:46.664+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=opaGroup, description=null, pdpGroupState=ACTIVE, properties={}, pdpSubgroups=[PdpSubGroup(pdpType=opa, supportedPolicyTypes=[onap.policies.native.opa 1.0.0], policies=[slice.capacity.check 1.0.0], currentInstanceCount=0, desiredInstanceCount=1, properties={}, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 15:03:05 policy-pap | [2025-06-13T14:57:46.786+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 15:03:05 policy-pap | [2025-06-13T14:57:46.871+00:00|INFO|InitializeUserDetailsBeanManagerConfigurer$InitializeUserDetailsManagerConfigurer|main] Global AuthenticationManager configured with UserDetailsService bean with name inMemoryUserDetailsManager 15:03:05 policy-pap | [2025-06-13T14:57:47.086+00:00|INFO|OptionalValidatorFactoryBean|main] Failed to set up a Bean Validation provider: jakarta.validation.NoProviderFoundException: Unable to create a Configuration, because no Jakarta Validation provider could be found. Add a provider like Hibernate Validator (RI) to your classpath. 
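The PapDatabaseInitializer entry above seeds PAP's database from the mounted /opt/app/policy/pap/etc/mounted/groups.json. Reconstructed from the logged PdpGroups toString, that file plausibly looks like the sketch below; the JSON field names are assumed from the structure printed in the log, not copied from the file itself:

    {
      "groups": [
        {
          "name": "opaGroup",
          "pdpGroupState": "ACTIVE",
          "properties": {},
          "pdpSubgroups": [
            {
              "pdpType": "opa",
              "desiredInstanceCount": 1,
              "properties": {},
              "supportedPolicyTypes": [
                {"name": "onap.policies.native.opa", "version": "1.0.0"}
              ],
              "policies": [
                {"name": "slice.capacity.check", "version": "1.0.0"}
              ]
            }
          ]
        }
      ]
    }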
15:03:05 policy-pap | [2025-06-13T14:57:47.891+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoints beneath base path '' 15:03:05 policy-pap | [2025-06-13T14:57:48.026+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 15:03:05 policy-pap | [2025-06-13T14:57:48.057+00:00|INFO|TomcatWebServer|main] Tomcat started on port 6969 (http) with context path '/policy/pap/v1' 15:03:05 policy-pap | [2025-06-13T14:57:48.080+00:00|INFO|ServiceManager|main] Policy PAP starting 15:03:05 policy-pap | [2025-06-13T14:57:48.080+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 15:03:05 policy-pap | [2025-06-13T14:57:48.081+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 15:03:05 policy-pap | [2025-06-13T14:57:48.082+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 15:03:05 policy-pap | [2025-06-13T14:57:48.082+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 15:03:05 policy-pap | [2025-06-13T14:57:48.082+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 15:03:05 policy-pap | [2025-06-13T14:57:48.082+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 15:03:05 policy-pap | [2025-06-13T14:57:48.084+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=fddb7771-28a4-4343-b8d7-b4045b0e6dfb, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@127350d8 15:03:05 policy-pap | [2025-06-13T14:57:48.096+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=fddb7771-28a4-4343-b8d7-b4045b0e6dfb, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 15:03:05 policy-pap | [2025-06-13T14:57:48.096+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 15:03:05 policy-pap | allow.auto.create.topics = true 15:03:05 policy-pap | auto.commit.interval.ms = 5000 15:03:05 policy-pap | auto.include.jmx.reporter = true 15:03:05 policy-pap | auto.offset.reset = latest 15:03:05 policy-pap | bootstrap.servers = [kafka:9092] 15:03:05 policy-pap | check.crcs = true 15:03:05 policy-pap | client.dns.lookup = use_all_dns_ips 15:03:05 policy-pap | client.id = consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3 15:03:05 policy-pap | client.rack = 15:03:05 policy-pap | connections.max.idle.ms = 540000 15:03:05 policy-pap | default.api.timeout.ms = 60000 15:03:05 policy-pap | enable.auto.commit = true 15:03:05 policy-pap | enable.metrics.push = true 15:03:05 policy-pap | exclude.internal.topics = true 15:03:05 policy-pap | 
fetch.max.bytes = 52428800 15:03:05 policy-pap | fetch.max.wait.ms = 500 15:03:05 policy-pap | fetch.min.bytes = 1 15:03:05 policy-pap | group.id = fddb7771-28a4-4343-b8d7-b4045b0e6dfb 15:03:05 policy-pap | group.instance.id = null 15:03:05 policy-pap | group.protocol = classic 15:03:05 policy-pap | group.remote.assignor = null 15:03:05 policy-pap | heartbeat.interval.ms = 3000 15:03:05 policy-pap | interceptor.classes = [] 15:03:05 policy-pap | internal.leave.group.on.close = true 15:03:05 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 15:03:05 policy-pap | isolation.level = read_uncommitted 15:03:05 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 15:03:05 policy-pap | max.partition.fetch.bytes = 1048576 15:03:05 policy-pap | max.poll.interval.ms = 300000 15:03:05 policy-pap | max.poll.records = 500 15:03:05 policy-pap | metadata.max.age.ms = 300000 15:03:05 policy-pap | metadata.recovery.strategy = none 15:03:05 policy-pap | metric.reporters = [] 15:03:05 policy-pap | metrics.num.samples = 2 15:03:05 policy-pap | metrics.recording.level = INFO 15:03:05 policy-pap | metrics.sample.window.ms = 30000 15:03:05 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 15:03:05 policy-pap | receive.buffer.bytes = 65536 15:03:05 policy-pap | reconnect.backoff.max.ms = 1000 15:03:05 policy-pap | reconnect.backoff.ms = 50 15:03:05 policy-pap | request.timeout.ms = 30000 15:03:05 policy-pap | retry.backoff.max.ms = 1000 15:03:05 policy-pap | retry.backoff.ms = 100 15:03:05 policy-pap | sasl.client.callback.handler.class = null 15:03:05 policy-pap | sasl.jaas.config = null 15:03:05 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 15:03:05 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 15:03:05 policy-pap | sasl.kerberos.service.name = null 15:03:05 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 15:03:05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 15:03:05 policy-pap | sasl.login.callback.handler.class = null 15:03:05 policy-pap | sasl.login.class = null 15:03:05 policy-pap | sasl.login.connect.timeout.ms = null 15:03:05 policy-pap | sasl.login.read.timeout.ms = null 15:03:05 policy-pap | sasl.login.refresh.buffer.seconds = 300 15:03:05 policy-pap | sasl.login.refresh.min.period.seconds = 60 15:03:05 policy-pap | sasl.login.refresh.window.factor = 0.8 15:03:05 policy-pap | sasl.login.refresh.window.jitter = 0.05 15:03:05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 15:03:05 policy-pap | sasl.login.retry.backoff.ms = 100 15:03:05 policy-pap | sasl.mechanism = GSSAPI 15:03:05 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 15:03:05 policy-pap | sasl.oauthbearer.expected.audience = null 15:03:05 policy-pap | sasl.oauthbearer.expected.issuer = null 15:03:05 policy-pap | sasl.oauthbearer.header.urlencode = false 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 15:03:05 policy-pap | sasl.oauthbearer.scope.claim.name = scope 15:03:05 policy-pap | sasl.oauthbearer.sub.claim.name = sub 15:03:05 policy-pap | sasl.oauthbearer.token.endpoint.url = null 15:03:05 policy-pap | security.protocol = PLAINTEXT 15:03:05 policy-pap | 
security.providers = null 15:03:05 policy-pap | send.buffer.bytes = 131072 15:03:05 policy-pap | session.timeout.ms = 45000 15:03:05 policy-pap | socket.connection.setup.timeout.max.ms = 30000 15:03:05 policy-pap | socket.connection.setup.timeout.ms = 10000 15:03:05 policy-pap | ssl.cipher.suites = null 15:03:05 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 15:03:05 policy-pap | ssl.endpoint.identification.algorithm = https 15:03:05 policy-pap | ssl.engine.factory.class = null 15:03:05 policy-pap | ssl.key.password = null 15:03:05 policy-pap | ssl.keymanager.algorithm = SunX509 15:03:05 policy-pap | ssl.keystore.certificate.chain = null 15:03:05 policy-pap | ssl.keystore.key = null 15:03:05 policy-pap | ssl.keystore.location = null 15:03:05 policy-pap | ssl.keystore.password = null 15:03:05 policy-pap | ssl.keystore.type = JKS 15:03:05 policy-pap | ssl.protocol = TLSv1.3 15:03:05 policy-pap | ssl.provider = null 15:03:05 policy-pap | ssl.secure.random.implementation = null 15:03:05 policy-pap | ssl.trustmanager.algorithm = PKIX 15:03:05 policy-pap | ssl.truststore.certificates = null 15:03:05 policy-pap | ssl.truststore.location = null 15:03:05 policy-pap | ssl.truststore.password = null 15:03:05 policy-pap | ssl.truststore.type = JKS 15:03:05 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 15:03:05 policy-pap | 15:03:05 policy-pap | [2025-06-13T14:57:48.097+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 15:03:05 policy-pap | [2025-06-13T14:57:48.104+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 15:03:05 policy-pap | [2025-06-13T14:57:48.104+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 15:03:05 policy-pap | [2025-06-13T14:57:48.104+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826668104 15:03:05 policy-pap | [2025-06-13T14:57:48.104+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] Subscribed to topic(s): policy-pdp-pap 15:03:05 policy-pap | [2025-06-13T14:57:48.105+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 15:03:05 policy-pap | [2025-06-13T14:57:48.105+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=5067318c-f13b-40ab-8aff-79ec5a4018df, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@49e3bd37 15:03:05 policy-pap | [2025-06-13T14:57:48.105+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=5067318c-f13b-40ab-8aff-79ec5a4018df, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, 
effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 15:03:05 policy-pap | [2025-06-13T14:57:48.106+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 15:03:05 policy-pap | allow.auto.create.topics = true 15:03:05 policy-pap | auto.commit.interval.ms = 5000 15:03:05 policy-pap | auto.include.jmx.reporter = true 15:03:05 policy-pap | auto.offset.reset = latest 15:03:05 policy-pap | bootstrap.servers = [kafka:9092] 15:03:05 policy-pap | check.crcs = true 15:03:05 policy-pap | client.dns.lookup = use_all_dns_ips 15:03:05 policy-pap | client.id = consumer-policy-pap-4 15:03:05 policy-pap | client.rack = 15:03:05 policy-pap | connections.max.idle.ms = 540000 15:03:05 policy-pap | default.api.timeout.ms = 60000 15:03:05 policy-pap | enable.auto.commit = true 15:03:05 policy-pap | enable.metrics.push = true 15:03:05 policy-pap | exclude.internal.topics = true 15:03:05 policy-pap | fetch.max.bytes = 52428800 15:03:05 policy-pap | fetch.max.wait.ms = 500 15:03:05 policy-pap | fetch.min.bytes = 1 15:03:05 policy-pap | group.id = policy-pap 15:03:05 policy-pap | group.instance.id = null 15:03:05 policy-pap | group.protocol = classic 15:03:05 policy-pap | group.remote.assignor = null 15:03:05 policy-pap | heartbeat.interval.ms = 3000 15:03:05 policy-pap | interceptor.classes = [] 15:03:05 policy-pap | internal.leave.group.on.close = true 15:03:05 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 15:03:05 policy-pap | isolation.level = read_uncommitted 15:03:05 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 15:03:05 policy-pap | max.partition.fetch.bytes = 1048576 15:03:05 policy-pap | max.poll.interval.ms = 300000 15:03:05 policy-pap | max.poll.records = 500 15:03:05 policy-pap | metadata.max.age.ms = 300000 15:03:05 policy-pap | metadata.recovery.strategy = none 15:03:05 policy-pap | metric.reporters = [] 15:03:05 policy-pap | metrics.num.samples = 2 15:03:05 policy-pap | metrics.recording.level = INFO 15:03:05 policy-pap | metrics.sample.window.ms = 30000 15:03:05 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 15:03:05 policy-pap | receive.buffer.bytes = 65536 15:03:05 policy-pap | reconnect.backoff.max.ms = 1000 15:03:05 policy-pap | reconnect.backoff.ms = 50 15:03:05 policy-pap | request.timeout.ms = 30000 15:03:05 policy-pap | retry.backoff.max.ms = 1000 15:03:05 policy-pap | retry.backoff.ms = 100 15:03:05 policy-pap | sasl.client.callback.handler.class = null 15:03:05 policy-pap | sasl.jaas.config = null 15:03:05 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 15:03:05 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 15:03:05 policy-pap | sasl.kerberos.service.name = null 15:03:05 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 15:03:05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 15:03:05 policy-pap | sasl.login.callback.handler.class = null 15:03:05 policy-pap | sasl.login.class = null 15:03:05 policy-pap | sasl.login.connect.timeout.ms = null 15:03:05 policy-pap | sasl.login.read.timeout.ms = null 15:03:05 policy-pap | sasl.login.refresh.buffer.seconds = 300 15:03:05 policy-pap | sasl.login.refresh.min.period.seconds = 60 15:03:05 policy-pap | sasl.login.refresh.window.factor = 0.8 15:03:05 policy-pap | sasl.login.refresh.window.jitter = 0.05 15:03:05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 15:03:05 policy-pap | 
sasl.login.retry.backoff.ms = 100 15:03:05 policy-pap | sasl.mechanism = GSSAPI 15:03:05 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 15:03:05 policy-pap | sasl.oauthbearer.expected.audience = null 15:03:05 policy-pap | sasl.oauthbearer.expected.issuer = null 15:03:05 policy-pap | sasl.oauthbearer.header.urlencode = false 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 15:03:05 policy-pap | sasl.oauthbearer.scope.claim.name = scope 15:03:05 policy-pap | sasl.oauthbearer.sub.claim.name = sub 15:03:05 policy-pap | sasl.oauthbearer.token.endpoint.url = null 15:03:05 policy-pap | security.protocol = PLAINTEXT 15:03:05 policy-pap | security.providers = null 15:03:05 policy-pap | send.buffer.bytes = 131072 15:03:05 policy-pap | session.timeout.ms = 45000 15:03:05 policy-pap | socket.connection.setup.timeout.max.ms = 30000 15:03:05 policy-pap | socket.connection.setup.timeout.ms = 10000 15:03:05 policy-pap | ssl.cipher.suites = null 15:03:05 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 15:03:05 policy-pap | ssl.endpoint.identification.algorithm = https 15:03:05 policy-pap | ssl.engine.factory.class = null 15:03:05 policy-pap | ssl.key.password = null 15:03:05 policy-pap | ssl.keymanager.algorithm = SunX509 15:03:05 policy-pap | ssl.keystore.certificate.chain = null 15:03:05 policy-pap | ssl.keystore.key = null 15:03:05 policy-pap | ssl.keystore.location = null 15:03:05 policy-pap | ssl.keystore.password = null 15:03:05 policy-pap | ssl.keystore.type = JKS 15:03:05 policy-pap | ssl.protocol = TLSv1.3 15:03:05 policy-pap | ssl.provider = null 15:03:05 policy-pap | ssl.secure.random.implementation = null 15:03:05 policy-pap | ssl.trustmanager.algorithm = PKIX 15:03:05 policy-pap | ssl.truststore.certificates = null 15:03:05 policy-pap | ssl.truststore.location = null 15:03:05 policy-pap | ssl.truststore.password = null 15:03:05 policy-pap | ssl.truststore.type = JKS 15:03:05 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 15:03:05 policy-pap | 15:03:05 policy-pap | [2025-06-13T14:57:48.106+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 15:03:05 policy-pap | [2025-06-13T14:57:48.112+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 15:03:05 policy-pap | [2025-06-13T14:57:48.112+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 15:03:05 policy-pap | [2025-06-13T14:57:48.112+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826668112 15:03:05 policy-pap | [2025-06-13T14:57:48.112+00:00|INFO|ClassicKafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 15:03:05 policy-pap | [2025-06-13T14:57:48.113+00:00|INFO|ServiceManager|main] Policy PAP starting topics 15:03:05 policy-pap | [2025-06-13T14:57:48.113+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=5067318c-f13b-40ab-8aff-79ec5a4018df, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, 
useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 15:03:05 policy-pap | [2025-06-13T14:57:48.113+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=fddb7771-28a4-4343-b8d7-b4045b0e6dfb, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 15:03:05 policy-pap | [2025-06-13T14:57:48.113+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9ed66539-2aaf-46b3-bfe9-118189f8c8b1, alive=false, publisher=null]]: starting 15:03:05 policy-pap | [2025-06-13T14:57:48.127+00:00|INFO|ProducerConfig|main] ProducerConfig values: 15:03:05 policy-pap | acks = -1 15:03:05 policy-pap | auto.include.jmx.reporter = true 15:03:05 policy-pap | batch.size = 16384 15:03:05 policy-pap | bootstrap.servers = [kafka:9092] 15:03:05 policy-pap | buffer.memory = 33554432 15:03:05 policy-pap | client.dns.lookup = use_all_dns_ips 15:03:05 policy-pap | client.id = producer-1 15:03:05 policy-pap | compression.gzip.level = -1 15:03:05 policy-pap | compression.lz4.level = 9 15:03:05 policy-pap | compression.type = none 15:03:05 policy-pap | compression.zstd.level = 3 15:03:05 policy-pap | connections.max.idle.ms = 540000 15:03:05 policy-pap | delivery.timeout.ms = 120000 15:03:05 policy-pap | enable.idempotence = true 15:03:05 policy-pap | enable.metrics.push = true 15:03:05 policy-pap | interceptor.classes = [] 15:03:05 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 15:03:05 policy-pap | linger.ms = 0 15:03:05 policy-pap | max.block.ms = 60000 15:03:05 policy-pap | max.in.flight.requests.per.connection = 5 15:03:05 policy-pap | max.request.size = 1048576 15:03:05 policy-pap | metadata.max.age.ms = 300000 15:03:05 policy-pap | metadata.max.idle.ms = 300000 15:03:05 policy-pap | metadata.recovery.strategy = none 15:03:05 policy-pap | metric.reporters = [] 15:03:05 policy-pap | metrics.num.samples = 2 15:03:05 policy-pap | metrics.recording.level = INFO 15:03:05 policy-pap | metrics.sample.window.ms = 30000 15:03:05 policy-pap | partitioner.adaptive.partitioning.enable = true 15:03:05 policy-pap | partitioner.availability.timeout.ms = 0 15:03:05 policy-pap | partitioner.class = null 15:03:05 policy-pap | partitioner.ignore.keys = false 15:03:05 policy-pap | receive.buffer.bytes = 32768 15:03:05 policy-pap | reconnect.backoff.max.ms = 1000 15:03:05 policy-pap | reconnect.backoff.ms = 50 15:03:05 policy-pap | request.timeout.ms = 30000 15:03:05 policy-pap | retries = 2147483647 15:03:05 policy-pap | retry.backoff.max.ms = 1000 15:03:05 policy-pap | retry.backoff.ms = 100 15:03:05 policy-pap | sasl.client.callback.handler.class = null 15:03:05 policy-pap | sasl.jaas.config = null 15:03:05 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 15:03:05 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 15:03:05 policy-pap 
| sasl.kerberos.service.name = null 15:03:05 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 15:03:05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 15:03:05 policy-pap | sasl.login.callback.handler.class = null 15:03:05 policy-pap | sasl.login.class = null 15:03:05 policy-pap | sasl.login.connect.timeout.ms = null 15:03:05 policy-pap | sasl.login.read.timeout.ms = null 15:03:05 policy-pap | sasl.login.refresh.buffer.seconds = 300 15:03:05 policy-pap | sasl.login.refresh.min.period.seconds = 60 15:03:05 policy-pap | sasl.login.refresh.window.factor = 0.8 15:03:05 policy-pap | sasl.login.refresh.window.jitter = 0.05 15:03:05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 15:03:05 policy-pap | sasl.login.retry.backoff.ms = 100 15:03:05 policy-pap | sasl.mechanism = GSSAPI 15:03:05 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 15:03:05 policy-pap | sasl.oauthbearer.expected.audience = null 15:03:05 policy-pap | sasl.oauthbearer.expected.issuer = null 15:03:05 policy-pap | sasl.oauthbearer.header.urlencode = false 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 15:03:05 policy-pap | sasl.oauthbearer.scope.claim.name = scope 15:03:05 policy-pap | sasl.oauthbearer.sub.claim.name = sub 15:03:05 policy-pap | sasl.oauthbearer.token.endpoint.url = null 15:03:05 policy-pap | security.protocol = PLAINTEXT 15:03:05 policy-pap | security.providers = null 15:03:05 policy-pap | send.buffer.bytes = 131072 15:03:05 policy-pap | socket.connection.setup.timeout.max.ms = 30000 15:03:05 policy-pap | socket.connection.setup.timeout.ms = 10000 15:03:05 policy-pap | ssl.cipher.suites = null 15:03:05 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 15:03:05 policy-pap | ssl.endpoint.identification.algorithm = https 15:03:05 policy-pap | ssl.engine.factory.class = null 15:03:05 policy-pap | ssl.key.password = null 15:03:05 policy-pap | ssl.keymanager.algorithm = SunX509 15:03:05 policy-pap | ssl.keystore.certificate.chain = null 15:03:05 policy-pap | ssl.keystore.key = null 15:03:05 policy-pap | ssl.keystore.location = null 15:03:05 policy-pap | ssl.keystore.password = null 15:03:05 policy-pap | ssl.keystore.type = JKS 15:03:05 policy-pap | ssl.protocol = TLSv1.3 15:03:05 policy-pap | ssl.provider = null 15:03:05 policy-pap | ssl.secure.random.implementation = null 15:03:05 policy-pap | ssl.trustmanager.algorithm = PKIX 15:03:05 policy-pap | ssl.truststore.certificates = null 15:03:05 policy-pap | ssl.truststore.location = null 15:03:05 policy-pap | ssl.truststore.password = null 15:03:05 policy-pap | ssl.truststore.type = JKS 15:03:05 policy-pap | transaction.timeout.ms = 60000 15:03:05 policy-pap | transactional.id = null 15:03:05 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 15:03:05 policy-pap | 15:03:05 policy-pap | [2025-06-13T14:57:48.128+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 15:03:05 policy-pap | [2025-06-13T14:57:48.144+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
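Annotation: "Instantiated an idempotent producer" follows directly from the defaults visible in the ProducerConfig dump above (enable.idempotence = true, acks = -1, retries = 2147483647, max.in.flight.requests.per.connection = 5), which give duplicate-free, in-order delivery per partition within a producer session. A hedged Java sketch of an equivalent publisher; the class name and sample payload are invented, only the topic and bootstrap address come from the log:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpPapPublisherSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Idempotence is already the default in Kafka 3.x clients; set explicitly for clarity.
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
                producer.flush(); // wait for the in-flight send to complete
            }
        }
    }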
15:03:05 policy-pap | [2025-06-13T14:57:48.162+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 15:03:05 policy-pap | [2025-06-13T14:57:48.162+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 15:03:05 policy-pap | [2025-06-13T14:57:48.162+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826668162 15:03:05 policy-pap | [2025-06-13T14:57:48.162+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9ed66539-2aaf-46b3-bfe9-118189f8c8b1, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 15:03:05 policy-pap | [2025-06-13T14:57:48.162+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=1be9ef40-eb1d-412d-9a8a-888908121419, alive=false, publisher=null]]: starting 15:03:05 policy-pap | [2025-06-13T14:57:48.163+00:00|INFO|ProducerConfig|main] ProducerConfig values: 15:03:05 policy-pap | acks = -1 15:03:05 policy-pap | auto.include.jmx.reporter = true 15:03:05 policy-pap | batch.size = 16384 15:03:05 policy-pap | bootstrap.servers = [kafka:9092] 15:03:05 policy-pap | buffer.memory = 33554432 15:03:05 policy-pap | client.dns.lookup = use_all_dns_ips 15:03:05 policy-pap | client.id = producer-2 15:03:05 policy-pap | compression.gzip.level = -1 15:03:05 policy-pap | compression.lz4.level = 9 15:03:05 policy-pap | compression.type = none 15:03:05 policy-pap | compression.zstd.level = 3 15:03:05 policy-pap | connections.max.idle.ms = 540000 15:03:05 policy-pap | delivery.timeout.ms = 120000 15:03:05 policy-pap | enable.idempotence = true 15:03:05 policy-pap | enable.metrics.push = true 15:03:05 policy-pap | interceptor.classes = [] 15:03:05 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 15:03:05 policy-pap | linger.ms = 0 15:03:05 policy-pap | max.block.ms = 60000 15:03:05 policy-pap | max.in.flight.requests.per.connection = 5 15:03:05 policy-pap | max.request.size = 1048576 15:03:05 policy-pap | metadata.max.age.ms = 300000 15:03:05 policy-pap | metadata.max.idle.ms = 300000 15:03:05 policy-pap | metadata.recovery.strategy = none 15:03:05 policy-pap | metric.reporters = [] 15:03:05 policy-pap | metrics.num.samples = 2 15:03:05 policy-pap | metrics.recording.level = INFO 15:03:05 policy-pap | metrics.sample.window.ms = 30000 15:03:05 policy-pap | partitioner.adaptive.partitioning.enable = true 15:03:05 policy-pap | partitioner.availability.timeout.ms = 0 15:03:05 policy-pap | partitioner.class = null 15:03:05 policy-pap | partitioner.ignore.keys = false 15:03:05 policy-pap | receive.buffer.bytes = 32768 15:03:05 policy-pap | reconnect.backoff.max.ms = 1000 15:03:05 policy-pap | reconnect.backoff.ms = 50 15:03:05 policy-pap | request.timeout.ms = 30000 15:03:05 policy-pap | retries = 2147483647 15:03:05 policy-pap | retry.backoff.max.ms = 1000 15:03:05 policy-pap | retry.backoff.ms = 100 15:03:05 policy-pap | sasl.client.callback.handler.class = null 15:03:05 policy-pap | sasl.jaas.config = null 15:03:05 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 15:03:05 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 15:03:05 policy-pap | sasl.kerberos.service.name = null 15:03:05 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 15:03:05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 15:03:05 policy-pap | sasl.login.callback.handler.class = null 15:03:05 policy-pap | sasl.login.class = null 15:03:05 policy-pap | 
sasl.login.connect.timeout.ms = null 15:03:05 policy-pap | sasl.login.read.timeout.ms = null 15:03:05 policy-pap | sasl.login.refresh.buffer.seconds = 300 15:03:05 policy-pap | sasl.login.refresh.min.period.seconds = 60 15:03:05 policy-pap | sasl.login.refresh.window.factor = 0.8 15:03:05 policy-pap | sasl.login.refresh.window.jitter = 0.05 15:03:05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 15:03:05 policy-pap | sasl.login.retry.backoff.ms = 100 15:03:05 policy-pap | sasl.mechanism = GSSAPI 15:03:05 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 15:03:05 policy-pap | sasl.oauthbearer.expected.audience = null 15:03:05 policy-pap | sasl.oauthbearer.expected.issuer = null 15:03:05 policy-pap | sasl.oauthbearer.header.urlencode = false 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 15:03:05 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 15:03:05 policy-pap | sasl.oauthbearer.scope.claim.name = scope 15:03:05 policy-pap | sasl.oauthbearer.sub.claim.name = sub 15:03:05 policy-pap | sasl.oauthbearer.token.endpoint.url = null 15:03:05 policy-pap | security.protocol = PLAINTEXT 15:03:05 policy-pap | security.providers = null 15:03:05 policy-pap | send.buffer.bytes = 131072 15:03:05 policy-pap | socket.connection.setup.timeout.max.ms = 30000 15:03:05 policy-pap | socket.connection.setup.timeout.ms = 10000 15:03:05 policy-pap | ssl.cipher.suites = null 15:03:05 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 15:03:05 policy-pap | ssl.endpoint.identification.algorithm = https 15:03:05 policy-pap | ssl.engine.factory.class = null 15:03:05 policy-pap | ssl.key.password = null 15:03:05 policy-pap | ssl.keymanager.algorithm = SunX509 15:03:05 policy-pap | ssl.keystore.certificate.chain = null 15:03:05 policy-pap | ssl.keystore.key = null 15:03:05 policy-pap | ssl.keystore.location = null 15:03:05 policy-pap | ssl.keystore.password = null 15:03:05 policy-pap | ssl.keystore.type = JKS 15:03:05 policy-pap | ssl.protocol = TLSv1.3 15:03:05 policy-pap | ssl.provider = null 15:03:05 policy-pap | ssl.secure.random.implementation = null 15:03:05 policy-pap | ssl.trustmanager.algorithm = PKIX 15:03:05 policy-pap | ssl.truststore.certificates = null 15:03:05 policy-pap | ssl.truststore.location = null 15:03:05 policy-pap | ssl.truststore.password = null 15:03:05 policy-pap | ssl.truststore.type = JKS 15:03:05 policy-pap | transaction.timeout.ms = 60000 15:03:05 policy-pap | transactional.id = null 15:03:05 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 15:03:05 policy-pap | 15:03:05 policy-pap | [2025-06-13T14:57:48.163+00:00|INFO|KafkaMetricsCollector|main] initializing Kafka metrics collector 15:03:05 policy-pap | [2025-06-13T14:57:48.163+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
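Annotation: two identically configured idempotent producers back the two inline sinks: producer-1 for the sink created first (partitionId 9ed66539-...) and producer-2 for the second (partitionId 1be9ef40-...). The LEADER_NOT_AVAILABLE warning emitted later on kafka-producer-network-thread | producer-2 for the policy-notification topic indicates that the second sink is the notification publisher, while producer-1 carries the policy-pdp-pap traffic.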
15:03:05 policy-pap | [2025-06-13T14:57:48.168+00:00|INFO|AppInfoParser|main] Kafka version: 3.9.1 15:03:05 policy-pap | [2025-06-13T14:57:48.168+00:00|INFO|AppInfoParser|main] Kafka commitId: f745dfdcee2b9851 15:03:05 policy-pap | [2025-06-13T14:57:48.168+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1749826668168 15:03:05 policy-pap | [2025-06-13T14:57:48.168+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=1be9ef40-eb1d-412d-9a8a-888908121419, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 15:03:05 policy-pap | [2025-06-13T14:57:48.168+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 15:03:05 policy-pap | [2025-06-13T14:57:48.168+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 15:03:05 policy-pap | [2025-06-13T14:57:48.171+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 15:03:05 policy-pap | [2025-06-13T14:57:48.171+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 15:03:05 policy-pap | [2025-06-13T14:57:48.173+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 15:03:05 policy-pap | [2025-06-13T14:57:48.173+00:00|INFO|TimerManager|Thread-9] timer manager update started 15:03:05 policy-pap | [2025-06-13T14:57:48.174+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 15:03:05 policy-pap | [2025-06-13T14:57:48.174+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 15:03:05 policy-pap | [2025-06-13T14:57:48.175+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 15:03:05 policy-pap | [2025-06-13T14:57:48.179+00:00|INFO|ServiceManager|main] Policy PAP started 15:03:05 policy-pap | [2025-06-13T14:57:48.179+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.808 seconds (process running for 11.401) 15:03:05 policy-pap | [2025-06-13T14:57:48.182+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 15:03:05 policy-pap | [2025-06-13T14:57:48.655+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: joYG5LnGQiS1GzzhjdPKfA 15:03:05 policy-pap | [2025-06-13T14:57:48.656+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] The metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 15:03:05 policy-pap | [2025-06-13T14:57:48.657+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] Cluster ID: joYG5LnGQiS1GzzhjdPKfA 15:03:05 policy-pap | [2025-06-13T14:57:48.658+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: joYG5LnGQiS1GzzhjdPKfA 15:03:05 policy-pap | [2025-06-13T14:57:48.709+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 15:03:05 policy-pap | [2025-06-13T14:57:48.709+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 15:03:05 policy-pap | [2025-06-13T14:57:48.797+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The 
metadata response from the cluster reported a recoverable issue with correlation id 3 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 15:03:05 policy-pap | [2025-06-13T14:57:48.797+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: joYG5LnGQiS1GzzhjdPKfA 15:03:05 policy-pap | [2025-06-13T14:57:48.915+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 15:03:05 policy-pap | [2025-06-13T14:57:48.928+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 15:03:05 policy-pap | [2025-06-13T14:57:49.181+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 15:03:05 policy-pap | [2025-06-13T14:57:49.196+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] The metadata response from the cluster reported a recoverable issue with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 15:03:05 policy-pap | [2025-06-13T14:57:49.584+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 15:03:05 policy-pap | [2025-06-13T14:57:49.681+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] The metadata response from the cluster reported a recoverable issue with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 15:03:05 policy-pap | [2025-06-13T14:57:50.462+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 13 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 15:03:05 policy-pap | [2025-06-13T14:57:50.516+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] The metadata response from the cluster reported a recoverable issue with correlation id 13 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 15:03:05 policy-pap | [2025-06-13T14:57:51.461+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] The metadata response from the cluster reported a recoverable issue with correlation id 15 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 15:03:05 policy-pap | [2025-06-13T14:57:51.482+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 15 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 15:03:05 
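Annotation: the UNKNOWN_TOPIC_OR_PARTITION and LEADER_NOT_AVAILABLE warnings here (and the few that continue just below) are the normal, self-healing signature of broker-side auto topic creation (auto.create.topics.enable): the first metadata request for policy-pdp-pap triggers creation of the topic, and the clients simply retry until a leader is elected, roughly five seconds in this run (14:57:48.656 to 14:57:53.3). Pre-creating the topic would silence the noise; a minimal AdminClient sketch, assuming a single-broker test cluster (the class name and the partition/replication counts are illustrative, only the topic name and bootstrap address come from the log):

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreatePdpPapTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // One partition, replication factor 1: matches a single-broker CSIT cluster.
                admin.createTopics(List.of(new NewTopic("policy-pdp-pap", 1, (short) 1))).all().get();
            }
        }
    }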
policy-pap | [2025-06-13T14:57:52.404+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] The metadata response from the cluster reported a recoverable issue with correlation id 17 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 15:03:05 policy-pap | [2025-06-13T14:57:52.498+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 17 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 15:03:05 policy-pap | [2025-06-13T14:57:53.336+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] The metadata response from the cluster reported a recoverable issue with correlation id 19 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 15:03:05 policy-pap | [2025-06-13T14:57:53.428+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 15:03:05 policy-pap | [2025-06-13T14:57:53.436+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] (Re-)joining group 15:03:05 policy-pap | [2025-06-13T14:57:53.471+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] Request joining group due to: need to re-join with the given member-id: consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3-fd418bd8-fb03-4512-a790-425496c85d57 15:03:05 policy-pap | [2025-06-13T14:57:53.471+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] (Re-)joining group 15:03:05 policy-pap | [2025-06-13T14:57:54.350+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 15:03:05 policy-pap | [2025-06-13T14:57:54.352+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 15:03:05 policy-pap | [2025-06-13T14:57:54.359+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-955fec44-6790-46e8-a57f-50c11dd9b3c2 15:03:05 policy-pap | [2025-06-13T14:57:54.359+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 15:03:05 policy-pap | [2025-06-13T14:57:56.497+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] Successfully joined group with generation Generation{generationId=1, memberId='consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3-fd418bd8-fb03-4512-a790-425496c85d57', protocol='range'} 15:03:05 policy-pap | [2025-06-13T14:57:56.509+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] Finished assignment for group at generation 1: {consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3-fd418bd8-fb03-4512-a790-425496c85d57=Assignment(partitions=[policy-pdp-pap-0])} 15:03:05 policy-pap | [2025-06-13T14:57:56.565+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] Successfully synced group in generation Generation{generationId=1, memberId='consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3-fd418bd8-fb03-4512-a790-425496c85d57', protocol='range'} 15:03:05 policy-pap | [2025-06-13T14:57:56.566+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 15:03:05 policy-pap | [2025-06-13T14:57:56.572+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] Adding newly assigned partitions: policy-pdp-pap-0 15:03:05 policy-pap | [2025-06-13T14:57:56.588+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] Found no committed offset for partition policy-pdp-pap-0 15:03:05 policy-pap | [2025-06-13T14:57:56.604+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fddb7771-28a4-4343-b8d7-b4045b0e6dfb-3, groupId=fddb7771-28a4-4343-b8d7-b4045b0e6dfb] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
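Annotation: the lines above are the classic-protocol group handshake for the fddb7771-... consumer group: discover the coordinator (advertised as node id 2147483646, i.e. Integer.MAX_VALUE minus the broker id 1), join twice (the first attempt exists only to obtain the broker-assigned member id), sync under the range assignor, and take ownership of the topic's single partition policy-pdp-pap-0. With no committed offset and auto.offset.reset = latest, the consumer starts at offset 0, the log-end offset of the freshly auto-created, still-empty partition. The policy-pap group runs the identical handshake just below.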
15:03:05 policy-pap | [2025-06-13T14:57:57.365+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-955fec44-6790-46e8-a57f-50c11dd9b3c2', protocol='range'} 15:03:05 policy-pap | [2025-06-13T14:57:57.366+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-955fec44-6790-46e8-a57f-50c11dd9b3c2=Assignment(partitions=[policy-pdp-pap-0])} 15:03:05 policy-pap | [2025-06-13T14:57:57.377+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-955fec44-6790-46e8-a57f-50c11dd9b3c2', protocol='range'} 15:03:05 policy-pap | [2025-06-13T14:57:57.378+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 15:03:05 policy-pap | [2025-06-13T14:57:57.378+00:00|INFO|ConsumerRebalanceListenerInvoker|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 15:03:05 policy-pap | [2025-06-13T14:57:57.380+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 15:03:05 policy-pap | [2025-06-13T14:57:57.384+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
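Annotation: the exchange below (14:59:43.834 through roughly 14:59:44.9) is the complete PAP/PDP handshake: the OPA PDP registers with a PASSIVE, HEALTHY PDP_STATUS; PAP starts a PdpUpdate guarded by a 30-second timer (expireMs=1749826814413) and publishes a PDP_UPDATE telling the PDP to heartbeat every 120 s and deploy slice.capacity.check 1.0.0; the PDP answers SUCCESS, the timer is cancelled, and PAP follows up with a PDP_STATE_CHANGE that moves the PDP from PASSIVE to ACTIVE. The PDP_UPDATE JSON appears three times: once outbound, then once inbound on each source topic, where PAP's own MessageTypeDispatchers discard the echoed message. The properties.data and properties.policy values in that message are plain Base64. Decoded, the data entry node.slice.capacity.check is:

    {
        "threshold": 70
    }

and the slice.capacity.check Rego module reads (tabs rendered as four spaces):

    package slice.capacity.check

    # Default rule to deny if no policy matches
    default decision := {
        "result": "Permit",
        "reason": "No matching rules applied",
    }

    # Deny rule for `sst = 1` and `total_resource > 40`
    decision := {
        "result": "Deny",
        "reason": sprintf("Slicing capacity in cell crosses limit of %v", [data.node.slice.capacity.check.threshold]),
    } if {
        input.action == "cellslicingcapacitycheck"
        input.sst == 1
        input.total_resource > data.node.slice.capacity.check.threshold
    }

    # Deny rule for `sst = 29` and `total_resource > 40`
    decision := {
        "result": "Deny",
        "reason": sprintf("Slicing capacity in cell crosses limit of %v", [data.node.slice.capacity.check.threshold]),
    } if {
        input.action == "cellslicingcapacitycheck"
        input.sst == 29
        input.total_resource > data.node.slice.capacity.check.threshold
    }

Note that the default rule's comment says "deny" while the rule actually returns Permit, and the deny-rule comments mention a limit of 40 while the code compares against the configured threshold of 70; both quirks are in the deployed policy itself, not transcription errors.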
15:03:05 policy-pap | [2025-06-13T14:58:41.622+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 15:03:05 policy-pap | [2025-06-13T14:58:41.622+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 15:03:05 policy-pap | [2025-06-13T14:58:41.625+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 3 ms 15:03:05 policy-pap | [2025-06-13T14:59:43.834+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: 15:03:05 policy-pap | [] 15:03:05 policy-pap | [2025-06-13T14:59:43.835+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"30da412e-18d5-4103-a9b9-1b79bf0032e2","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1749826783782","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T14:59:43.836+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Registration Message","response":null,"policies":[],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"30da412e-18d5-4103-a9b9-1b79bf0032e2","pdpGroup":"opaGroup","pdpSubgroup":null,"timestampMs":"1749826783782","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T14:59:43.842+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 15:03:05 policy-pap | [2025-06-13T14:59:44.411+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting 15:03:05 policy-pap | [2025-06-13T14:59:44.411+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting listener 15:03:05 policy-pap | [2025-06-13T14:59:44.412+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting timer 15:03:05 policy-pap | [2025-06-13T14:59:44.413+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=1f423618-4816-40d2-af95-c2f95c4a4e89, expireMs=1749826814413] 15:03:05 policy-pap | [2025-06-13T14:59:44.414+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting enqueue 15:03:05 policy-pap | [2025-06-13T14:59:44.415+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=1f423618-4816-40d2-af95-c2f95c4a4e89, expireMs=1749826814413] 15:03:05 policy-pap | [2025-06-13T14:59:44.415+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate started 15:03:05 policy-pap | [2025-06-13T14:59:44.420+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | 
{"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"1f423618-4816-40d2-af95-c2f95c4a4e89","timestampMs":1749826784381,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T14:59:44.461+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | 
{"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"1f423618-4816-40d2-af95-c2f95c4a4e89","timestampMs":1749826784381,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T14:59:44.462+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 15:03:05 policy-pap | [2025-06-13T14:59:44.466+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 policy-pap | 
{"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.slice.capacity.check":"ewogICAgInRocmVzaG9sZCI6IDcwCn0="},"policy":{"slice.capacity.check":"cGFja2FnZSBzbGljZS5jYXBhY2l0eS5jaGVjawoKIyBEZWZhdWx0IHJ1bGUgdG8gZGVueSBpZiBubyBwb2xpY3kgbWF0Y2hlcwpkZWZhdWx0IGRlY2lzaW9uIDo9IHsKCSJyZXN1bHQiOiAiUGVybWl0IiwKCSJyZWFzb24iOiAiTm8gbWF0Y2hpbmcgcnVsZXMgYXBwbGllZCIsCn0KCiMgRGVueSBydWxlIGZvciBgc3N0ID0gMWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAxCglpbnB1dC50b3RhbF9yZXNvdXJjZSA+IGRhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGQKfQoKIyBEZW55IHJ1bGUgZm9yIGBzc3QgPSAyOWAgYW5kIGB0b3RhbF9yZXNvdXJjZSA+IDQwYApkZWNpc2lvbiA6PSB7CgkicmVzdWx0IjogIkRlbnkiLAoJInJlYXNvbiI6IHNwcmludGYoIlNsaWNpbmcgY2FwYWNpdHkgaW4gY2VsbCBjcm9zc2VzIGxpbWl0IG9mICV2IiwgW2RhdGEubm9kZS5zbGljZS5jYXBhY2l0eS5jaGVjay50aHJlc2hvbGRdKSwKfSBpZiB7CglpbnB1dC5hY3Rpb24gPT0gImNlbGxzbGljaW5nY2FwYWNpdHljaGVjayIKCWlucHV0LnNzdCA9PSAyOQoJaW5wdXQudG90YWxfcmVzb3VyY2UgPiBkYXRhLm5vZGUuc2xpY2UuY2FwYWNpdHkuY2hlY2sudGhyZXNob2xkCn0K"}},"name":"slice.capacity.check","version":"1.0.0","metadata":{"policy-id":"slice.capacity.check","policy-version":"1.0.0"}}],"messageName":"PDP_UPDATE","requestId":"1f423618-4816-40d2-af95-c2f95c4a4e89","timestampMs":1749826784381,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T14:59:44.467+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 15:03:05 policy-pap | [2025-06-13T14:59:44.497+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"1f423618-4816-40d2-af95-c2f95c4a4e89","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"ab322ff8-d4fd-4165-9d6e-38ab6ebcd98d","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826784485","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T14:59:44.498+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping 15:03:05 policy-pap | [2025-06-13T14:59:44.498+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping enqueue 15:03:05 policy-pap | [2025-06-13T14:59:44.498+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping timer 15:03:05 policy-pap | [2025-06-13T14:59:44.498+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=1f423618-4816-40d2-af95-c2f95c4a4e89, expireMs=1749826814413] 15:03:05 policy-pap | [2025-06-13T14:59:44.499+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping listener 15:03:05 policy-pap | 
[2025-06-13T14:59:44.499+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopped 15:03:05 policy-pap | [2025-06-13T14:59:44.503+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"1f423618-4816-40d2-af95-c2f95c4a4e89","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"slice.capacity.check\": \"1.0.0\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"ab322ff8-d4fd-4165-9d6e-38ab6ebcd98d","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826784485","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T14:59:44.505+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 1f423618-4816-40d2-af95-c2f95c4a4e89 15:03:05 policy-pap | [2025-06-13T14:59:44.511+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate successful 15:03:05 policy-pap | [2025-06-13T14:59:44.512+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 start publishing next request 15:03:05 policy-pap | [2025-06-13T14:59:44.512+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpStateChange starting 15:03:05 policy-pap | [2025-06-13T14:59:44.512+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpStateChange starting listener 15:03:05 policy-pap | [2025-06-13T14:59:44.512+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpStateChange starting timer 15:03:05 policy-pap | [2025-06-13T14:59:44.512+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 15:03:05 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"slice.capacity.check","policy-version":"1.0.0","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 15:03:05 policy-pap | [2025-06-13T14:59:44.512+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=408cff49-7a82-4c40-b611-4f2d9ab1965f, expireMs=1749826814512] 15:03:05 policy-pap | [2025-06-13T14:59:44.513+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpStateChange starting enqueue 15:03:05 policy-pap | [2025-06-13T14:59:44.514+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"408cff49-7a82-4c40-b611-4f2d9ab1965f","timestampMs":1749826784382,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T14:59:44.513+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=408cff49-7a82-4c40-b611-4f2d9ab1965f, expireMs=1749826814512] 15:03:05 policy-pap | [2025-06-13T14:59:44.513+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpStateChange started 15:03:05 policy-pap | [2025-06-13T14:59:44.527+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 
policy-pap | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"408cff49-7a82-4c40-b611-4f2d9ab1965f","timestampMs":1749826784382,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T14:59:44.528+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 15:03:05 policy-pap | [2025-06-13T14:59:44.534+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"408cff49-7a82-4c40-b611-4f2d9ab1965f","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"40d89689-3927-47b3-bcc3-20ecf5869e25","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826784523","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T14:59:44.535+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 408cff49-7a82-4c40-b611-4f2d9ab1965f 15:03:05 policy-pap | [2025-06-13T14:59:44.539+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {policy-notification=LEADER_NOT_AVAILABLE} 15:03:05 policy-pap | [2025-06-13T14:59:44.874+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"408cff49-7a82-4c40-b611-4f2d9ab1965f","timestampMs":1749826784382,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T14:59:44.875+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 15:03:05 policy-pap | [2025-06-13T14:59:44.876+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message to Pdp State Change","response":{"responseTo":"408cff49-7a82-4c40-b611-4f2d9ab1965f","responseStatus":"SUCCESS","responseMessage":"PDP State Changed From PASSIVE TO Active"},"policies":[],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"40d89689-3927-47b3-bcc3-20ecf5869e25","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826784523","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T14:59:44.876+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpStateChange stopping 15:03:05 policy-pap | [2025-06-13T14:59:44.876+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpStateChange stopping enqueue 15:03:05 policy-pap | [2025-06-13T14:59:44.876+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpStateChange stopping timer 15:03:05 policy-pap | [2025-06-13T14:59:44.876+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=408cff49-7a82-4c40-b611-4f2d9ab1965f, expireMs=1749826814512] 15:03:05 policy-pap | 
[2025-06-13T14:59:44.877+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpStateChange stopping listener 15:03:05 policy-pap | [2025-06-13T14:59:44.877+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpStateChange stopped 15:03:05 policy-pap | [2025-06-13T14:59:44.877+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpStateChange successful 15:03:05 policy-pap | [2025-06-13T14:59:44.877+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 start publishing next request 15:03:05 policy-pap | [2025-06-13T14:59:44.877+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting 15:03:05 policy-pap | [2025-06-13T14:59:44.877+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting listener 15:03:05 policy-pap | [2025-06-13T14:59:44.877+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting timer 15:03:05 policy-pap | [2025-06-13T14:59:44.877+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=d4991179-3f61-467b-b5fb-23820aa90cec, expireMs=1749826814877] 15:03:05 policy-pap | [2025-06-13T14:59:44.877+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting enqueue 15:03:05 policy-pap | [2025-06-13T14:59:44.877+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate started 15:03:05 policy-pap | [2025-06-13T14:59:44.877+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d4991179-3f61-467b-b5fb-23820aa90cec","timestampMs":1749826784866,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T14:59:44.883+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d4991179-3f61-467b-b5fb-23820aa90cec","timestampMs":1749826784866,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T14:59:44.883+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 15:03:05 policy-pap | [2025-06-13T14:59:44.887+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 policy-pap | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d4991179-3f61-467b-b5fb-23820aa90cec","timestampMs":1749826784866,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T14:59:44.887+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 15:03:05 policy-pap | [2025-06-13T14:59:44.891+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 policy-pap | 
{"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"d4991179-3f61-467b-b5fb-23820aa90cec","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"98e35d33-c13c-4f19-ae67-ff182b7db59a","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826784881","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T14:59:44.892+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id d4991179-3f61-467b-b5fb-23820aa90cec 15:03:05 policy-pap | [2025-06-13T14:59:44.894+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"d4991179-3f61-467b-b5fb-23820aa90cec","responseStatus":"SUCCESS","responseMessage":"PDP UPDATE is successfull"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"98e35d33-c13c-4f19-ae67-ff182b7db59a","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826784881","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T14:59:44.894+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping 15:03:05 policy-pap | [2025-06-13T14:59:44.894+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping enqueue 15:03:05 policy-pap | [2025-06-13T14:59:44.894+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping timer 15:03:05 policy-pap | [2025-06-13T14:59:44.894+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=d4991179-3f61-467b-b5fb-23820aa90cec, expireMs=1749826814877] 15:03:05 policy-pap | [2025-06-13T14:59:44.894+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping listener 15:03:05 policy-pap | [2025-06-13T14:59:44.894+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopped 15:03:05 policy-pap | [2025-06-13T14:59:44.901+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate successful 15:03:05 policy-pap | [2025-06-13T14:59:44.901+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 has no more requests 15:03:05 policy-pap | [2025-06-13T14:59:48.176+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms 15:03:05 policy-pap | [2025-06-13T15:00:14.413+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=1f423618-4816-40d2-af95-c2f95c4a4e89, expireMs=1749826814413] 15:03:05 policy-pap | [2025-06-13T15:00:14.514+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=408cff49-7a82-4c40-b611-4f2d9ab1965f, expireMs=1749826814512] 15:03:05 policy-pap | [2025-06-13T15:00:43.799+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp 
heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"a5a8f0c1-0979-44c7-8dd8-2755e1d24f3f","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826843785","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T15:00:43.799+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"a5a8f0c1-0979-44c7-8dd8-2755e1d24f3f","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826843785","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T15:00:43.799+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 15:03:05 policy-pap | [2025-06-13T15:00:46.359+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group opaGroup 15:03:05 policy-pap | [2025-06-13T15:00:46.360+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-7] add policy zoneB 1.0.6 to subgroup opaGroup opa count=2 15:03:05 policy-pap | [2025-06-13T15:00:46.361+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering a deploy for policy zoneB 1.0.6 15:03:05 policy-pap | [2025-06-13T15:00:46.361+00:00|INFO|SessionData|http-nio-6969-exec-7] add update opa-641304f2-5b4c-46df-814c-634a7e4652a2 opaGroup opa policies=1 15:03:05 policy-pap | [2025-06-13T15:00:46.362+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group opaGroup 15:03:05 policy-pap | [2025-06-13T15:00:46.363+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group opaGroup 15:03:05 policy-pap | [2025-06-13T15:00:46.382+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=DEPLOYMENT, timestamp=2025-06-13T15:00:46Z, user=policyadmin)] 15:03:05 policy-pap | [2025-06-13T15:00:46.416+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting 15:03:05 policy-pap | [2025-06-13T15:00:46.416+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting listener 15:03:05 policy-pap | [2025-06-13T15:00:46.416+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting timer 15:03:05 policy-pap | [2025-06-13T15:00:46.416+00:00|INFO|TimerManager|http-nio-6969-exec-7] update timer registered Timer [name=194dc0e9-80c2-4963-b95a-1439c0acd97d, expireMs=1749826876416] 15:03:05 policy-pap | [2025-06-13T15:00:46.416+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting enqueue 15:03:05 policy-pap | [2025-06-13T15:00:46.417+00:00|INFO|ServiceManager|http-nio-6969-exec-7] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate started 15:03:05 policy-pap | [2025-06-13T15:00:46.417+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | 
{"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"194dc0e9-80c2-4963-b95a-1439c0acd97d","timestampMs":1749826846361,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T15:00:46.417+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=194dc0e9-80c2-4963-b95a-1439c0acd97d, expireMs=1749826876416] 15:03:05 policy-pap | [2025-06-13T15:00:46.424+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 policy-pap | 
{"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"194dc0e9-80c2-4963-b95a-1439c0acd97d","timestampMs":1749826846361,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T15:00:46.424+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 15:03:05 policy-pap | [2025-06-13T15:00:46.424+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | 
{"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.zoneB":"ewogICJ6b25lIjogewogICAgInpvbmVfYWNjZXNzX2xvZ3MiOiBbCiAgICAgIHsgImxvZ19pZCI6ICJsb2cxIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDA5OjAwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJncmFudGVkIiwgInVzZXIiOiAidXNlcjEiIH0sCiAgICAgIHsgImxvZ19pZCI6ICJsb2cyIiwgInRpbWVzdGFtcCI6ICIyMDI0LTExLTAxVDEwOjMwOjAwWiIsICJ6b25lX2lkIjogInpvbmVBIiwgImFjY2VzcyI6ICJkZW5pZWQiLCAidXNlciI6ICJ1c2VyMiIgfSwKICAgICAgeyAibG9nX2lkIjogImxvZzMiLCAidGltZXN0YW1wIjogIjIwMjQtMTEtMDFUMTE6MDA6MDBaIiwgInpvbmVfaWQiOiAiem9uZUIiLCAiYWNjZXNzIjogImdyYW50ZWQiLCAidXNlciI6ICJ1c2VyMyIgfQogICAgXQogIH0KfQ=="},"policy":{"zoneB":"cGFja2FnZSB6b25lQgogCmltcG9ydCByZWdvLnYxCiAKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQogCmFsbG93IGlmIHsKICAgIGhhc196b25lX2FjY2VzcwogICAgYWN0aW9uX2lzX2xvZ192aWV3Cn0KIAphY3Rpb25faXNfbG9nX3ZpZXcgaWYgewogICAgInZpZXciIGluIGlucHV0LmFjdGlvbnMKfQogCmhhc196b25lX2FjY2VzcyBjb250YWlucyBhY2Nlc3NfZGF0YSBpZiB7CiAgICBzb21lIHpvbmVfZGF0YSBpbiBkYXRhLm5vZGUuem9uZUIuem9uZS56b25lX2FjY2Vzc19sb2dzCiAgICB6b25lX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KICAgIHpvbmVfZGF0YS50aW1lc3RhbXAgPCBpbnB1dC50aW1lX3BlcmlvZC50bwogICAgem9uZV9kYXRhLnpvbmVfaWQgPT0gaW5wdXQuem9uZV9pZAogICAgYWNjZXNzX2RhdGEgOj0ge2RhdGF0eXBlOiB6b25lX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"zoneB","version":"1.0.6","metadata":{"policy-id":"zoneB","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"194dc0e9-80c2-4963-b95a-1439c0acd97d","timestampMs":1749826846361,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T15:00:46.424+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 15:03:05 policy-pap | [2025-06-13T15:00:46.471+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"194dc0e9-80c2-4963-b95a-1439c0acd97d","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"84effe89-684b-40f8-9d0c-053273f9619a","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826846460","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T15:00:46.472+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 194dc0e9-80c2-4963-b95a-1439c0acd97d 15:03:05 policy-pap | [2025-06-13T15:00:46.474+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"194dc0e9-80c2-4963-b95a-1439c0acd97d","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"zoneB\": 
\"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"zoneB","version":"1.0.6"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"84effe89-684b-40f8-9d0c-053273f9619a","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826846460","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T15:00:46.475+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping 15:03:05 policy-pap | [2025-06-13T15:00:46.475+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping enqueue 15:03:05 policy-pap | [2025-06-13T15:00:46.475+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping timer 15:03:05 policy-pap | [2025-06-13T15:00:46.475+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=194dc0e9-80c2-4963-b95a-1439c0acd97d, expireMs=1749826876416] 15:03:05 policy-pap | [2025-06-13T15:00:46.475+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping listener 15:03:05 policy-pap | [2025-06-13T15:00:46.475+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopped 15:03:05 policy-pap | [2025-06-13T15:00:46.484+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate successful 15:03:05 policy-pap | [2025-06-13T15:00:46.484+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 has no more requests 15:03:05 policy-pap | [2025-06-13T15:00:46.484+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 15:03:05 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 15:03:05 policy-pap | [2025-06-13T15:01:10.950+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group opaGroup 15:03:05 policy-pap | [2025-06-13T15:01:10.951+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-10] remove policy zoneB 1.0.6 from subgroup opaGroup opa count=1 15:03:05 policy-pap | [2025-06-13T15:01:10.951+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering an undeploy for policy zoneB 1.0.6 15:03:05 policy-pap | [2025-06-13T15:01:10.952+00:00|INFO|SessionData|http-nio-6969-exec-10] add update opa-641304f2-5b4c-46df-814c-634a7e4652a2 opaGroup opa policies=0 15:03:05 policy-pap | [2025-06-13T15:01:10.952+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group opaGroup 15:03:05 policy-pap | [2025-06-13T15:01:10.952+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group opaGroup 15:03:05 policy-pap | [2025-06-13T15:01:10.963+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=zoneB 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-13T15:01:10Z, user=policyadmin)] 15:03:05 policy-pap | [2025-06-13T15:01:10.976+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting 15:03:05 policy-pap | [2025-06-13T15:01:10.977+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting listener 15:03:05 policy-pap | 
[2025-06-13T15:01:10.977+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting timer 15:03:05 policy-pap | [2025-06-13T15:01:10.977+00:00|INFO|TimerManager|http-nio-6969-exec-10] update timer registered Timer [name=034d61f0-e48f-404e-b0a7-5184bc7a67ad, expireMs=1749826900977] 15:03:05 policy-pap | [2025-06-13T15:01:10.977+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting enqueue 15:03:05 policy-pap | [2025-06-13T15:01:10.977+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"034d61f0-e48f-404e-b0a7-5184bc7a67ad","timestampMs":1749826870952,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T15:01:10.977+00:00|INFO|ServiceManager|http-nio-6969-exec-10] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate started 15:03:05 policy-pap | [2025-06-13T15:01:10.985+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 policy-pap | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"034d61f0-e48f-404e-b0a7-5184bc7a67ad","timestampMs":1749826870952,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T15:01:10.985+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 15:03:05 policy-pap | [2025-06-13T15:01:10.988+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"zoneB","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"034d61f0-e48f-404e-b0a7-5184bc7a67ad","timestampMs":1749826870952,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T15:01:10.988+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 15:03:05 policy-pap | [2025-06-13T15:01:10.995+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"034d61f0-e48f-404e-b0a7-5184bc7a67ad","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed: {\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"509cf10f-d836-4d21-a4e0-2823236d2ce3","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826870986","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T15:01:10.996+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 034d61f0-e48f-404e-b0a7-5184bc7a67ad 15:03:05 policy-pap | [2025-06-13T15:01:10.996+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp 
Update","response":{"responseTo":"034d61f0-e48f-404e-b0a7-5184bc7a67ad","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"zoneB\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"509cf10f-d836-4d21-a4e0-2823236d2ce3","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826870986","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T15:01:10.996+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping 15:03:05 policy-pap | [2025-06-13T15:01:10.996+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping enqueue 15:03:05 policy-pap | [2025-06-13T15:01:10.996+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping timer 15:03:05 policy-pap | [2025-06-13T15:01:10.996+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=034d61f0-e48f-404e-b0a7-5184bc7a67ad, expireMs=1749826900977] 15:03:05 policy-pap | [2025-06-13T15:01:10.996+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping listener 15:03:05 policy-pap | [2025-06-13T15:01:10.997+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopped 15:03:05 policy-pap | [2025-06-13T15:01:11.012+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate successful 15:03:05 policy-pap | [2025-06-13T15:01:11.012+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 has no more requests 15:03:05 policy-pap | [2025-06-13T15:01:11.012+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 15:03:05 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"zoneB","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]} 15:03:05 policy-pap | [2025-06-13T15:01:11.438+00:00|INFO|SessionData|http-nio-6969-exec-8] cache group opaGroup 15:03:05 policy-pap | [2025-06-13T15:01:11.440+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-8] failed to undeploy policy: zoneB null 15:03:05 policy-pap | [2025-06-13T15:01:11.440+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-8] undeploy policy failed 15:03:05 policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: zoneB null 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 15:03:05 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 15:03:05 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 
15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 15:03:05 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:03:05 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:03:05 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 15:03:05 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 15:03:05 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 15:03:05 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:03:05 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:03:05 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() 15:03:05 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 15:03:05 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 15:03:05 policy-pap | at 
org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) 15:03:05 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) 15:03:05 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) 15:03:05 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) 15:03:05 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) 15:03:05 policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) 15:03:05 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) 15:03:05 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) 15:03:05 policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) 15:03:05 policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) 15:03:05 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) 15:03:05 policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) 15:03:05 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) 15:03:05 policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) 15:03:05 policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) 15:03:05 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 
policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) 15:03:05 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) 15:03:05 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) 15:03:05 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) 15:03:05 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) 15:03:05 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 15:03:05 policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) 15:03:05 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 15:03:05 policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) 15:03:05 policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) 15:03:05 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) 15:03:05 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at 
org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) 15:03:05 policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) 15:03:05 policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) 15:03:05 policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) 15:03:05 policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) 15:03:05 policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) 15:03:05 policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) 15:03:05 policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) 15:03:05 policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) 15:03:05 policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) 15:03:05 policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) 15:03:05 policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) 15:03:05 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) 15:03:05 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) 15:03:05 policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) 15:03:05 policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) 15:03:05 policy-pap | [2025-06-13T15:01:12.211+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group opaGroup 15:03:05 policy-pap | [2025-06-13T15:01:12.211+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-9] add policy vehicle 1.0.6 to subgroup opaGroup opa count=2 15:03:05 policy-pap | [2025-06-13T15:01:12.211+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy vehicle 1.0.6 15:03:05 policy-pap | [2025-06-13T15:01:12.211+00:00|INFO|SessionData|http-nio-6969-exec-9] add update opa-641304f2-5b4c-46df-814c-634a7e4652a2 opaGroup opa policies=1 15:03:05 policy-pap | [2025-06-13T15:01:12.212+00:00|INFO|SessionData|http-nio-6969-exec-9] update cached group opaGroup 15:03:05 policy-pap | [2025-06-13T15:01:12.212+00:00|INFO|SessionData|http-nio-6969-exec-9] updating DB group opaGroup 15:03:05 policy-pap | [2025-06-13T15:01:12.219+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=DEPLOYMENT, timestamp=2025-06-13T15:01:12Z, user=policyadmin)] 15:03:05 policy-pap | [2025-06-13T15:01:12.227+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting 15:03:05 policy-pap | [2025-06-13T15:01:12.227+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-641304f2-5b4c-46df-814c-634a7e4652a2 
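
[editor's note, not part of the console output] The PfModelException above appears to be a deliberate negative check rather than a test problem: zoneB 1.0.6 was already undeployed at 15:01:10, so a second undeploy request at 15:01:11 finds the policy in no PDP group and PAP rejects it with a WARN instead of publishing another PDP_UPDATE (the "zoneB null" in the message means no version was supplied, so the lookup matched by name only). A minimal sketch of such a call, assuming PAP's simple undeploy endpoint DELETE /policy/pap/v1/pdps/policies/{name}, a local PAP on port 6969, and default CSIT credentials (all assumptions; the actual address and auth are not shown in this log):

import requests  # third-party HTTP client (pip install requests)

PAP_URL = "https://localhost:6969"  # hypothetical address, for illustration only

# Undeploy by name only; repeating this after the policy is already gone
# reproduces "policy does not appear in any PDP group: zoneB null".
resp = requests.delete(
    f"{PAP_URL}/policy/pap/v1/pdps/policies/zoneB",
    auth=("policyadmin", "zb!XztG34"),  # assumed default CSIT credentials
    verify=False,  # CSIT environments typically use self-signed certificates
)
print(resp.status_code, resp.text)

[end of editor's note]
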
PdpUpdate starting listener 15:03:05 policy-pap | [2025-06-13T15:01:12.228+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting timer 15:03:05 policy-pap | [2025-06-13T15:01:12.228+00:00|INFO|TimerManager|http-nio-6969-exec-9] update timer registered Timer [name=6f98eed8-53b6-407c-ad5e-8d210b368c3a, expireMs=1749826902228] 15:03:05 policy-pap | [2025-06-13T15:01:12.228+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting enqueue 15:03:05 policy-pap | [2025-06-13T15:01:12.228+00:00|INFO|ServiceManager|http-nio-6969-exec-9] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate started 15:03:05 policy-pap | [2025-06-13T15:01:12.229+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6f98eed8-53b6-407c-ad5e-8d210b368c3a","timestampMs":1749826872211,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T15:01:12.236+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | 
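
[editor's decode, added for readability; not part of the console output] The vehicle PDP_UPDATE above (echoed again below on both topics) carries these base64 blobs. The data entry "node.vehicle" is:

{
  "vehicles": [
    { "vehicle_id": "v1", "owner": "user1", "type": "car", "status": "available" },
    { "vehicle_id": "v2", "owner": "user2", "type": "bike", "status": "in use" }
  ]
}

The policy entry "vehicle" is the following Rego, decoded from the payload:

package vehicle

import rego.v1

default allow := false

allow if {
    user_has_vehicle_access
    action_is_granted
}

action_is_granted if {
    "use" in input.actions
}

user_has_vehicle_access contains vehicle_data if {
    some vehicle in data.node.vehicle.vehicles
    vehicle.vehicle_id == input.vehicle_id
    vehicle.owner == input.user
    vehicle_data := {info: vehicle[info] | info in input.attributes}
}

[end of editor's decode]
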
{"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6f98eed8-53b6-407c-ad5e-8d210b368c3a","timestampMs":1749826872211,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T15:01:12.237+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 policy-pap | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.vehicle":"ewogICJ2ZWhpY2xlcyI6IFsKICAgIHsgInZlaGljbGVfaWQiOiAidjEiLCAib3duZXIiOiAidXNlcjEiLCAidHlwZSI6ICJjYXIiLCAic3RhdHVzIjogImF2YWlsYWJsZSIgfSwKICAgIHsgInZlaGljbGVfaWQiOiAidjIiLCAib3duZXIiOiAidXNlcjIiLCAidHlwZSI6ICJiaWtlIiwgInN0YXR1cyI6ICJpbiB1c2UiIH0KICBdCn0K"},"policy":{"vehicle":"cGFja2FnZSB2ZWhpY2xlCgppbXBvcnQgIHJlZ28udjEKCmRlZmF1bHQgYWxsb3cgOj0gZmFsc2UKCmFsbG93IGlmIHsKICAgIHVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzCiAgICBhY3Rpb25faXNfZ3JhbnRlZAp9CgphY3Rpb25faXNfZ3JhbnRlZCBpZiB7CiAgICAidXNlIiBpbiBpbnB1dC5hY3Rpb25zCn0KCnVzZXJfaGFzX3ZlaGljbGVfYWNjZXNzIGNvbnRhaW5zIHZlaGljbGVfZGF0YSBpZiB7CiAgICBzb21lIHZlaGljbGUgaW4gZGF0YS5ub2RlLnZlaGljbGUudmVoaWNsZXMKICAgIHZlaGljbGUudmVoaWNsZV9pZCA9PSBpbnB1dC52ZWhpY2xlX2lkCiAgICB2ZWhpY2xlLm93bmVyID09IGlucHV0LnVzZXIKICAgIHZlaGljbGVfZGF0YSA6PSB7aW5mbzogdmVoaWNsZVtpbmZvXSB8IGluZm8gaW4gaW5wdXQuYXR0cmlidXRlc30KfQo="}},"name":"vehicle","version":"1.0.6","metadata":{"policy-id":"vehicle","policy-version":"1.0.6"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6f98eed8-53b6-407c-ad5e-8d210b368c3a","timestampMs":1749826872211,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T15:01:12.237+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 15:03:05 policy-pap | [2025-06-13T15:01:12.237+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 15:03:05 policy-pap | [2025-06-13T15:01:12.276+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"6f98eed8-53b6-407c-ad5e-8d210b368c3a","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n 
\"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"3ed0e407-b113-4d2c-a858-6366a51c9b09","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826872264","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T15:01:12.276+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"6f98eed8-53b6-407c-ad5e-8d210b368c3a","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"vehicle","version":"1.0.6"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"3ed0e407-b113-4d2c-a858-6366a51c9b09","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826872264","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T15:01:12.277+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 6f98eed8-53b6-407c-ad5e-8d210b368c3a 15:03:05 policy-pap | [2025-06-13T15:01:12.278+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping 15:03:05 policy-pap | [2025-06-13T15:01:12.278+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping enqueue 15:03:05 policy-pap | [2025-06-13T15:01:12.278+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping timer 15:03:05 policy-pap | [2025-06-13T15:01:12.278+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=6f98eed8-53b6-407c-ad5e-8d210b368c3a, expireMs=1749826902228] 15:03:05 policy-pap | [2025-06-13T15:01:12.279+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping listener 15:03:05 policy-pap | [2025-06-13T15:01:12.279+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopped 15:03:05 policy-pap | [2025-06-13T15:01:12.287+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate successful 15:03:05 policy-pap | [2025-06-13T15:01:12.287+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 has no more requests 15:03:05 policy-pap | [2025-06-13T15:01:12.287+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 15:03:05 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 15:03:05 policy-pap | [2025-06-13T15:01:16.417+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=194dc0e9-80c2-4963-b95a-1439c0acd97d, expireMs=1749826876416] 15:03:05 policy-pap | [2025-06-13T15:01:35.691+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group opaGroup 15:03:05 policy-pap | [2025-06-13T15:01:35.691+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-1] remove policy vehicle 1.0.6 from subgroup opaGroup opa count=1 15:03:05 policy-pap | [2025-06-13T15:01:35.691+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering 
an undeploy for policy vehicle 1.0.6 15:03:05 policy-pap | [2025-06-13T15:01:35.691+00:00|INFO|SessionData|http-nio-6969-exec-1] add update opa-641304f2-5b4c-46df-814c-634a7e4652a2 opaGroup opa policies=0 15:03:05 policy-pap | [2025-06-13T15:01:35.691+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group opaGroup 15:03:05 policy-pap | [2025-06-13T15:01:35.691+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group opaGroup 15:03:05 policy-pap | [2025-06-13T15:01:35.698+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=vehicle 1.0.6, action=UNDEPLOYMENT, timestamp=2025-06-13T15:01:35Z, user=policyadmin)] 15:03:05 policy-pap | [2025-06-13T15:01:35.708+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting 15:03:05 policy-pap | [2025-06-13T15:01:35.708+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting listener 15:03:05 policy-pap | [2025-06-13T15:01:35.708+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting timer 15:03:05 policy-pap | [2025-06-13T15:01:35.708+00:00|INFO|TimerManager|http-nio-6969-exec-1] update timer registered Timer [name=045d2699-9d86-4a83-9873-ac3266bc9f6a, expireMs=1749826925708] 15:03:05 policy-pap | [2025-06-13T15:01:35.708+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting enqueue 15:03:05 policy-pap | [2025-06-13T15:01:35.708+00:00|INFO|TimerManager|Thread-9] update timer waiting 30000ms Timer [name=045d2699-9d86-4a83-9873-ac3266bc9f6a, expireMs=1749826925708] 15:03:05 policy-pap | [2025-06-13T15:01:35.708+00:00|INFO|ServiceManager|http-nio-6969-exec-1] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate started 15:03:05 policy-pap | [2025-06-13T15:01:35.708+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"045d2699-9d86-4a83-9873-ac3266bc9f6a","timestampMs":1749826895691,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T15:01:35.714+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"045d2699-9d86-4a83-9873-ac3266bc9f6a","timestampMs":1749826895691,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T15:01:35.715+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 15:03:05 policy-pap | [2025-06-13T15:01:35.715+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 policy-pap | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"vehicle","version":"1.0.6"}],"messageName":"PDP_UPDATE","requestId":"045d2699-9d86-4a83-9873-ac3266bc9f6a","timestampMs":1749826895691,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | 
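
For reference, the two base64 blobs carried in the PDP_UPDATE that deployed vehicle 1.0.6 earlier in this log decode as follows. The data entry node.vehicle is the JSON document the OPA PDP loads, which the policy reads back as data.node.vehicle.vehicles:

    {
      "vehicles": [
        { "vehicle_id": "v1", "owner": "user1", "type": "car", "status": "available" },
        { "vehicle_id": "v2", "owner": "user2", "type": "bike", "status": "in use" }
      ]
    }

The policy entry vehicle is the Rego module deployed as vehicle 1.0.6:

    package vehicle

    import rego.v1

    default allow := false

    allow if {
        user_has_vehicle_access
        action_is_granted
    }

    action_is_granted if {
        "use" in input.actions
    }

    user_has_vehicle_access contains vehicle_data if {
        some vehicle in data.node.vehicle.vehicles
        vehicle.vehicle_id == input.vehicle_id
        vehicle.owner == input.user
        vehicle_data := {info: vehicle[info] | info in input.attributes}
    }

The same decoding can be scripted for any PDP_UPDATE payload in this log; a minimal sketch using only the standard library, with the single-line JSON object pasted in from the log:

    import base64
    import json

    def dump_opa_policy(pdp_update_json: str) -> None:
        """Print the decoded data and policy entries of one PDP_UPDATE payload."""
        msg = json.loads(pdp_update_json)
        for tosca in msg.get("policiesToBeDeployed", []):
            props = tosca["properties"]
            # "data" values are base64-encoded JSON documents for the OPA data tree.
            for key, blob in props.get("data", {}).items():
                print(f"--- data[{key}] ---")
                print(base64.b64decode(blob).decode("utf-8"))
            # "policy" values are base64-encoded Rego modules.
            for key, blob in props.get("policy", {}).items():
                print(f"--- policy[{key}] (Rego) ---")
                print(base64.b64decode(blob).decode("utf-8"))
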
[2025-06-13T15:01:35.716+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 15:03:05 policy-pap | [2025-06-13T15:01:35.727+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"045d2699-9d86-4a83-9873-ac3266bc9f6a","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"e7fb1f7d-5271-4e89-b9da-fb8f6bac28f0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826895716","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T15:01:35.728+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping 15:03:05 policy-pap | [2025-06-13T15:01:35.728+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping enqueue 15:03:05 policy-pap | [2025-06-13T15:01:35.728+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping timer 15:03:05 policy-pap | [2025-06-13T15:01:35.728+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=045d2699-9d86-4a83-9873-ac3266bc9f6a, expireMs=1749826925708] 15:03:05 policy-pap | [2025-06-13T15:01:35.728+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"045d2699-9d86-4a83-9873-ac3266bc9f6a","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"vehicle\": \"1.0.6\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"e7fb1f7d-5271-4e89-b9da-fb8f6bac28f0","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826895716","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T15:01:35.728+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping listener 15:03:05 policy-pap | [2025-06-13T15:01:35.728+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopped 15:03:05 policy-pap | [2025-06-13T15:01:35.728+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 045d2699-9d86-4a83-9873-ac3266bc9f6a 15:03:05 policy-pap | [2025-06-13T15:01:35.735+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate successful 15:03:05 policy-pap | [2025-06-13T15:01:35.735+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 15:03:05 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"vehicle","policy-version":"1.0.6","success-count":1,"failure-count":0,"incomplete-count":0}]} 15:03:05 policy-pap | [2025-06-13T15:01:35.735+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 has no more requests 15:03:05 policy-pap | [2025-06-13T15:01:36.113+00:00|INFO|SessionData|http-nio-6969-exec-2] cache 
group opaGroup 15:03:05 policy-pap | [2025-06-13T15:01:36.113+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-2] failed to undeploy policy: vehicle null 15:03:05 policy-pap | [2025-06-13T15:01:36.113+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-2] undeploy policy failed 15:03:05 policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: vehicle null 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 15:03:05 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 15:03:05 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 15:03:05 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:03:05 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:03:05 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 15:03:05 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 15:03:05 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 15:03:05 policy-pap | at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 15:03:05 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:03:05 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:03:05 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() 15:03:05 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 15:03:05 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 15:03:05 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) 15:03:05 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) 15:03:05 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) 15:03:05 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) 15:03:05 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) 15:03:05 policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) 15:03:05 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) 15:03:05 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) 15:03:05 policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) 15:03:05 policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) 15:03:05 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) 15:03:05 policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) 15:03:05 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at 
org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) 15:03:05 policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) 15:03:05 policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) 15:03:05 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) 15:03:05 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) 15:03:05 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) 15:03:05 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) 15:03:05 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) 15:03:05 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 15:03:05 policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) 15:03:05 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 15:03:05 policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) 15:03:05 policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) 15:03:05 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) 15:03:05 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) 15:03:05 
policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) 15:03:05 policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) 15:03:05 policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) 15:03:05 policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) 15:03:05 policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) 15:03:05 policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) 15:03:05 policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) 15:03:05 policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) 15:03:05 policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) 15:03:05 policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) 15:03:05 policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) 15:03:05 policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) 15:03:05 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) 15:03:05 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) 15:03:05 policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) 15:03:05 policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) 15:03:05 policy-pap | 
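
A note on the stack trace above: it appears to be the already-undeployed path rather than a product fault. The undeploy of vehicle 1.0.6 completed successfully at 15:01:35 (see the PDP_STATUS and policy-notification messages just before it), so the second DELETE at 15:01:36 finds no PDP group still holding the policy, and PAP answers with PfModelException "policy does not appear in any PDP group: vehicle null" (the DELETE was evidently issued by name only, hence the null version). This is consistent with a deliberate negative check in the CSIT, and the same pattern repeats below for abac 1.0.7. A minimal sketch of the REST sequence that produces it, assuming the standard PAP undeploy endpoint DELETE /policy/pap/v1/pdps/policies/{name}; the host, TLS handling, and credentials below are placeholders, not values taken from this log:

    import requests  # third-party; pip install requests

    PAP = "https://localhost:6969/policy/pap/v1"
    AUTH = ("policyadmin", "<password>")  # placeholder credentials

    # First DELETE: vehicle is still deployed, so PAP accepts the request and
    # drives the PDP_UPDATE with policiesToBeUndeployed seen in the log.
    r1 = requests.delete(f"{PAP}/pdps/policies/vehicle", auth=AUTH, verify=False)
    print(r1.status_code)  # expected to be accepted

    # Second DELETE for the same policy: nothing is left to undeploy, and PAP
    # logs the WARN plus "policy does not appear in any PDP group: vehicle null".
    r2 = requests.delete(f"{PAP}/pdps/policies/vehicle", auth=AUTH, verify=False)
    print(r2.status_code)  # expected to be an error status
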
[2025-06-13T15:01:36.826+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group opaGroup 15:03:05 policy-pap | [2025-06-13T15:01:36.827+00:00|INFO|PdpGroupDeployProvider|http-nio-6969-exec-3] add policy abac 1.0.7 to subgroup opaGroup opa count=2 15:03:05 policy-pap | [2025-06-13T15:01:36.827+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering a deploy for policy abac 1.0.7 15:03:05 policy-pap | [2025-06-13T15:01:36.827+00:00|INFO|SessionData|http-nio-6969-exec-3] add update opa-641304f2-5b4c-46df-814c-634a7e4652a2 opaGroup opa policies=1 15:03:05 policy-pap | [2025-06-13T15:01:36.827+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group opaGroup 15:03:05 policy-pap | [2025-06-13T15:01:36.827+00:00|INFO|SessionData|http-nio-6969-exec-3] updating DB group opaGroup 15:03:05 policy-pap | [2025-06-13T15:01:36.833+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=DEPLOYMENT, timestamp=2025-06-13T15:01:36Z, user=policyadmin)] 15:03:05 policy-pap | [2025-06-13T15:01:36.840+00:00|INFO|ServiceManager|http-nio-6969-exec-3] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting 15:03:05 policy-pap | [2025-06-13T15:01:36.840+00:00|INFO|ServiceManager|http-nio-6969-exec-3] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting listener 15:03:05 policy-pap | [2025-06-13T15:01:36.840+00:00|INFO|ServiceManager|http-nio-6969-exec-3] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting timer 15:03:05 policy-pap | [2025-06-13T15:01:36.840+00:00|INFO|TimerManager|http-nio-6969-exec-3] update timer registered Timer [name=ebd23f04-0f67-4ece-9cfd-c851a71b8632, expireMs=1749826926840] 15:03:05 policy-pap | [2025-06-13T15:01:36.840+00:00|INFO|ServiceManager|http-nio-6969-exec-3] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting enqueue 15:03:05 policy-pap | [2025-06-13T15:01:36.840+00:00|INFO|ServiceManager|http-nio-6969-exec-3] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate started 15:03:05 policy-pap | [2025-06-13T15:01:36.841+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | 
{"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ebd23f04-0f67-4ece-9cfd-c851a71b8632","timestampMs":1749826896827,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T15:01:36.851+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 policy-pap | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1w
IjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ebd23f04-0f67-4ece-9cfd-c851a71b8632","timestampMs":1749826896827,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T15:01:36.851+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 15:03:05 policy-pap | [2025-06-13T15:01:36.853+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | 
{"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[{"type":"onap.policies.native.opa","type_version":"1.0.0","properties":{"data":{"node.abac":"ewogICAgInNlbnNvcl9kYXRhIjogWwogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDEiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiU3JpIExhbmthIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjI4IEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjYiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDAyIiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkNvbG9tYm8iLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMzAgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjEyMDAgbW0iLAogICAgICAgICAgICAid2luZHNwZWVkIjogIjYuMCBtL3MiLAogICAgICAgICAgICAiaHVtaWRpdHkiOiAiNDUlIiwKICAgICAgICAgICAgInBhcnRpY2xlX2RlbnNpdHkiOiAiMS41IGcvbCIsCiAgICAgICAgICAgICJ0aW1lc3RhbXAiOiAiMjAyNC0wMi0yNiIKICAgICAgICB9LAogICAgICAgIHsKICAgICAgICAgICAgImlkIjogIjAwMDMiLAogICAgICAgICAgICAibG9jYXRpb24iOiAiS2FuZHkiLAogICAgICAgICAgICAidGVtcGVyYXR1cmUiOiAiMjUgQyIsCiAgICAgICAgICAgICJwcmVjaXBpdGF0aW9uIjogIjgwMCBtbSIsCiAgICAgICAgICAgICJ3aW5kc3BlZWQiOiAiNC41IG0vcyIsCiAgICAgICAgICAgICJodW1pZGl0eSI6ICI2MCUiLAogICAgICAgICAgICAicGFydGljbGVfZGVuc2l0eSI6ICIxLjEgZy9sIiwKICAgICAgICAgICAgInRpbWVzdGFtcCI6ICIyMDI0LTAyLTI2IgogICAgICAgIH0sCiAgICAgICAgewogICAgICAgICAgICAiaWQiOiAiMDAwNCIsCiAgICAgICAgICAgICJsb2NhdGlvbiI6ICJHYWxsZSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI3LjIgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjMwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuOCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA1IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkphZmZuYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICItNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiMzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICIzLjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjIwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjAuOSBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjciCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA2IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIlRyaW5jb21hbGVlIiwKICAgICAgICAgICAgInRlbXBlcmF0dXJlIjogIjIwIEMiLAogICAgICAgICAgICAicHJlY2lwaXRhdGlvbiI6ICIxMDAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjU1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA3IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk51d2FyYSBFbGl5YSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyNSBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNjAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI0LjAgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjUwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuMyBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjgiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA4IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIkFudXJhZGhhcHVyYSIsCiAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIyOCBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiNzAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI1LjggbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjQwJSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNCBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfSwKICAgICAgICB7CiAgICAgICAgICAgICJpZCI6ICIwMDA5IiwKICAgICAgICAgICAgImxvY2F0aW9uIjogIk1hdGFyYSIsC
iAgICAgICAgICAgICJ0ZW1wZXJhdHVyZSI6ICIzMiBDIiwKICAgICAgICAgICAgInByZWNpcGl0YXRpb24iOiAiOTAwIG1tIiwKICAgICAgICAgICAgIndpbmRzcGVlZCI6ICI2LjUgbS9zIiwKICAgICAgICAgICAgImh1bWlkaXR5IjogIjY1JSIsCiAgICAgICAgICAgICJwYXJ0aWNsZV9kZW5zaXR5IjogIjEuNiBnL2wiLAogICAgICAgICAgICAidGltZXN0YW1wIjogIjIwMjQtMDItMjkiCiAgICAgICAgfQogICAgXQp9"},"policy":{"abac":"cGFja2FnZSBhYmFjCgppbXBvcnQgcmVnby52MQoKZGVmYXVsdCBhbGxvdyA6PSBmYWxzZQoKYWxsb3cgaWYgewogdmlld2FibGVfc2Vuc29yX2RhdGEKIGFjdGlvbl9pc19yZWFkCn0KCmFjdGlvbl9pc19yZWFkIGlmICJyZWFkIiBpbiBpbnB1dC5hY3Rpb25zCgp2aWV3YWJsZV9zZW5zb3JfZGF0YSBjb250YWlucyB2aWV3X2RhdGEgaWYgewogc29tZSBzZW5zb3JfZGF0YSBpbiBkYXRhLm5vZGUuYWJhYy5zZW5zb3JfZGF0YQogc2Vuc29yX2RhdGEudGltZXN0YW1wID49IGlucHV0LnRpbWVfcGVyaW9kLmZyb20KIHNlbnNvcl9kYXRhLnRpbWVzdGFtcCA8IGlucHV0LnRpbWVfcGVyaW9kLnRvCgogdmlld19kYXRhIDo9IHtkYXRhdHlwZTogc2Vuc29yX2RhdGFbZGF0YXR5cGVdIHwgZGF0YXR5cGUgaW4gaW5wdXQuZGF0YXR5cGVzfQp9"}},"name":"abac","version":"1.0.7","metadata":{"policy-id":"abac","policy-version":"1.0.7"}}],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"ebd23f04-0f67-4ece-9cfd-c851a71b8632","timestampMs":1749826896827,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T15:01:36.853+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 15:03:05 policy-pap | [2025-06-13T15:01:36.884+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ebd23f04-0f67-4ece-9cfd-c851a71b8632","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"3b38f5fd-a98a-4a06-88dc-dc820557aa5c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826896874","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T15:01:36.885+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id ebd23f04-0f67-4ece-9cfd-c851a71b8632 15:03:05 policy-pap | [2025-06-13T15:01:36.888+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ebd23f04-0f67-4ece-9cfd-c851a71b8632","responseStatus":"SUCCESS","responseMessage":"PDP Update Successful for all policies: {\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"3b38f5fd-a98a-4a06-88dc-dc820557aa5c","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826896874","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T15:01:36.889+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping 15:03:05 policy-pap | [2025-06-13T15:01:36.889+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping enqueue 15:03:05 policy-pap | [2025-06-13T15:01:36.889+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate 
stopping timer 15:03:05 policy-pap | [2025-06-13T15:01:36.889+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=ebd23f04-0f67-4ece-9cfd-c851a71b8632, expireMs=1749826926840] 15:03:05 policy-pap | [2025-06-13T15:01:36.889+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping listener 15:03:05 policy-pap | [2025-06-13T15:01:36.889+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopped 15:03:05 policy-pap | [2025-06-13T15:01:36.901+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate successful 15:03:05 policy-pap | [2025-06-13T15:01:36.901+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 has no more requests 15:03:05 policy-pap | [2025-06-13T15:01:36.901+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 15:03:05 policy-pap | {"deployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}],"undeployed-policies":[]} 15:03:05 policy-pap | [2025-06-13T15:01:44.499+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"656f0a92-2e3a-4d00-87c1-0613f8933b09","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826904489","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T15:01:44.499+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 15:03:05 policy-pap | [2025-06-13T15:01:44.501+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp heartbeat","response":null,"policies":[{"name":"slice.capacity.check","version":"1.0.0"},{"name":"abac","version":"1.0.7"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"656f0a92-2e3a-4d00-87c1-0613f8933b09","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826904489","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T15:01:48.187+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms 15:03:05 policy-pap | [2025-06-13T15:02:01.505+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group opaGroup 15:03:05 policy-pap | [2025-06-13T15:02:01.505+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy abac 1.0.7 from subgroup opaGroup opa count=1 15:03:05 policy-pap | [2025-06-13T15:02:01.507+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy abac 1.0.7 15:03:05 policy-pap | [2025-06-13T15:02:01.507+00:00|INFO|SessionData|http-nio-6969-exec-6] add update opa-641304f2-5b4c-46df-814c-634a7e4652a2 opaGroup opa policies=0 15:03:05 policy-pap | [2025-06-13T15:02:01.507+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group opaGroup 15:03:05 policy-pap | [2025-06-13T15:02:01.507+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group opaGroup 15:03:05 policy-pap | 
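
For reference, the abac payload deployed above decodes the same way as the vehicle one. The policy entry abac is the Rego module deployed as abac 1.0.7:

    package abac

    import rego.v1

    default allow := false

    allow if {
        viewable_sensor_data
        action_is_read
    }

    action_is_read if "read" in input.actions

    viewable_sensor_data contains view_data if {
        some sensor_data in data.node.abac.sensor_data
        sensor_data.timestamp >= input.time_period.from
        sensor_data.timestamp < input.time_period.to

        view_data := {datatype: sensor_data[datatype] | datatype in input.datatypes}
    }

The data entry node.abac holds nine sensor_data records keyed by id 0001 through 0009; the first is shown here as an excerpt:

    {
        "sensor_data": [
            {
                "id": "0001",
                "location": "Sri Lanka",
                "temperature": "28 C",
                "precipitation": "1000 mm",
                "windspeed": "5.5 m/s",
                "humidity": "40%",
                "particle_density": "1.3 g/l",
                "timestamp": "2024-02-26"
            },
            ... (eight more records, locations Colombo through Matara)
        ]
    }
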
[2025-06-13T15:02:01.517+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=opaGroup, pdpType=opa, policy=abac 1.0.7, action=UNDEPLOYMENT, timestamp=2025-06-13T15:02:01Z, user=policyadmin)] 15:03:05 policy-pap | [2025-06-13T15:02:01.527+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting 15:03:05 policy-pap | [2025-06-13T15:02:01.527+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting listener 15:03:05 policy-pap | [2025-06-13T15:02:01.527+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting timer 15:03:05 policy-pap | [2025-06-13T15:02:01.527+00:00|INFO|TimerManager|http-nio-6969-exec-6] update timer registered Timer [name=ed3139d6-f3ac-4406-bc91-e8a08d677771, expireMs=1749826951527] 15:03:05 policy-pap | [2025-06-13T15:02:01.527+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate starting enqueue 15:03:05 policy-pap | [2025-06-13T15:02:01.527+00:00|INFO|ServiceManager|http-nio-6969-exec-6] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate started 15:03:05 policy-pap | [2025-06-13T15:02:01.528+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"ed3139d6-f3ac-4406-bc91-e8a08d677771","timestampMs":1749826921507,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T15:02:01.537+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 policy-pap | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"ed3139d6-f3ac-4406-bc91-e8a08d677771","timestampMs":1749826921507,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T15:02:01.537+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 15:03:05 policy-pap | [2025-06-13T15:02:01.539+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"source":"pap-cc06ddb7-6859-4b48-88dc-bf7e78840923","policiesToBeDeployed":[],"policiesToBeUndeployed":[{"name":"abac","version":"1.0.7"}],"messageName":"PDP_UPDATE","requestId":"ed3139d6-f3ac-4406-bc91-e8a08d677771","timestampMs":1749826921507,"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","pdpGroup":"opaGroup","pdpSubgroup":"opa"} 15:03:05 policy-pap | [2025-06-13T15:02:01.539+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 15:03:05 policy-pap | [2025-06-13T15:02:01.550+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ed3139d6-f3ac-4406-bc91-e8a08d677771","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": 
\"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"daf5328f-bc9a-4695-acd7-84c1240b5f8d","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826921539","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T15:02:01.551+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id ed3139d6-f3ac-4406-bc91-e8a08d677771 15:03:05 policy-pap | [2025-06-13T15:02:01.551+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 15:03:05 policy-pap | {"messageName":"PDP_STATUS","pdpType":"opa","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp Status Response Message For Pdp Update","response":{"responseTo":"ed3139d6-f3ac-4406-bc91-e8a08d677771","responseStatus":"SUCCESS","responseMessage":"PDP Update Policies undeployed :,{\n \"abac\": \"1.0.7\"\n}"},"policies":[{"name":"slice.capacity.check","version":"1.0.0"}],"name":"opa-641304f2-5b4c-46df-814c-634a7e4652a2","requestId":"daf5328f-bc9a-4695-acd7-84c1240b5f8d","pdpGroup":"opaGroup","pdpSubgroup":"opa","timestampMs":"1749826921539","deploymentInstanceInfo":""} 15:03:05 policy-pap | [2025-06-13T15:02:01.551+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping 15:03:05 policy-pap | [2025-06-13T15:02:01.551+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping enqueue 15:03:05 policy-pap | [2025-06-13T15:02:01.552+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping timer 15:03:05 policy-pap | [2025-06-13T15:02:01.552+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=ed3139d6-f3ac-4406-bc91-e8a08d677771, expireMs=1749826951527] 15:03:05 policy-pap | [2025-06-13T15:02:01.552+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopping listener 15:03:05 policy-pap | [2025-06-13T15:02:01.552+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate stopped 15:03:05 policy-pap | [2025-06-13T15:02:01.563+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 PdpUpdate successful 15:03:05 policy-pap | [2025-06-13T15:02:01.563+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] opa-641304f2-5b4c-46df-814c-634a7e4652a2 has no more requests 15:03:05 policy-pap | [2025-06-13T15:02:01.563+00:00|INFO|network|Thread-8] [OUT|KAFKA|policy-notification] 15:03:05 policy-pap | {"deployed-policies":[],"undeployed-policies":[{"policy-type":"onap.policies.native.opa","policy-type-version":"1.0.0","policy-id":"abac","policy-version":"1.0.7","success-count":1,"failure-count":0,"incomplete-count":0}]} 15:03:05 policy-pap | [2025-06-13T15:02:01.889+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group opaGroup 15:03:05 policy-pap | [2025-06-13T15:02:01.890+00:00|WARN|PdpGroupDeleteProvider|http-nio-6969-exec-7] failed to undeploy policy: abac null 15:03:05 policy-pap | [2025-06-13T15:02:01.890+00:00|WARN|PdpGroupDeleteControllerV1|http-nio-6969-exec-7] undeploy policy failed 15:03:05 policy-pap | org.onap.policy.models.base.PfModelException: policy does not appear in any PDP group: abac null 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeployPolicy(PdpGroupDeleteProvider.java:108) 15:03:05 policy-pap | at 
org.onap.policy.pap.main.rest.ProviderBase.process(ProviderBase.java:161) 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider.undeploy(PdpGroupDeleteProvider.java:92) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 15:03:05 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 15:03:05 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 15:03:05 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:03:05 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:03:05 policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteProvider$$SpringCGLIB$$0.undeploy() 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.lambda$deletePolicy$1(PdpGroupDeleteControllerV1.java:107) 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.doUndeployOperation(PdpGroupDeleteControllerV1.java:160) 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1.deletePolicy(PdpGroupDeleteControllerV1.java:106) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 15:03:05 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 15:03:05 policy-pap | at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:359) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) 15:03:05 policy-pap | at org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:64) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:03:05 policy-pap | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) 15:03:05 policy-pap | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) 15:03:05 
policy-pap | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:728) 15:03:05 policy-pap | at org.onap.policy.pap.main.rest.PdpGroupDeleteControllerV1$$SpringCGLIB$$0.deletePolicy() 15:03:05 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) 15:03:05 policy-pap | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 15:03:05 policy-pap | at java.base/java.lang.reflect.Method.invoke(Method.java:569) 15:03:05 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:258) 15:03:05 policy-pap | at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:191) 15:03:05 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118) 15:03:05 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:986) 15:03:05 policy-pap | at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:891) 15:03:05 policy-pap | at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) 15:03:05 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089) 15:03:05 policy-pap | at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979) 15:03:05 policy-pap | at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014) 15:03:05 policy-pap | at org.springframework.web.servlet.FrameworkServlet.doDelete(FrameworkServlet.java:936) 15:03:05 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:659) 15:03:05 policy-pap | at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) 15:03:05 policy-pap | at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:723) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:195) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:108) 15:03:05 policy-pap | at org.springframework.security.web.FilterChainProxy.lambda$doFilterInternal$3(FilterChainProxy.java:231) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$FilterObservation$SimpleFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:479) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$1(ObservationFilterChainDecorator.java:340) 15:03:05 policy-pap | 
at org.springframework.security.web.ObservationFilterChainDecorator.lambda$wrapSecured$0(ObservationFilterChainDecorator.java:82) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:128) 15:03:05 policy-pap | at org.springframework.security.web.access.intercept.AuthorizationFilter.doFilter(AuthorizationFilter.java:101) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:126) 15:03:05 policy-pap | at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:120) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:100) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:179) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:213) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) 15:03:05 policy-pap | at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) 15:03:05 policy-pap | at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) 15:03:05 policy-pap | at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at 
org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:227) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.wrapFilter(ObservationFilterChainDecorator.java:240) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$AroundFilterObservation$SimpleAroundFilterObservation.lambda$wrap$0(ObservationFilterChainDecorator.java:323) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$ObservationFilter.doFilter(ObservationFilterChainDecorator.java:224) 15:03:05 policy-pap | at org.springframework.security.web.ObservationFilterChainDecorator$VirtualFilterChain.doFilter(ObservationFilterChainDecorator.java:137) 15:03:05 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) 15:03:05 policy-pap | at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) 15:03:05 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 15:03:05 policy-pap | at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) 15:03:05 policy-pap | at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) 15:03:05 policy-pap | at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) 15:03:05 policy-pap | at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) 15:03:05 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) 15:03:05 policy-pap | at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:114) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) 15:03:05 policy-pap | at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) 15:03:05 policy-pap | at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) 15:03:05 policy-pap | at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) 15:03:05 policy-pap | at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) 15:03:05 policy-pap | at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) 15:03:05 policy-pap | at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:116) 15:03:05 policy-pap | at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) 15:03:05 policy-pap | at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) 15:03:05 policy-pap | at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) 15:03:05 policy-pap | at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:398) 15:03:05 policy-pap | at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) 15:03:05 policy-pap | at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:903) 15:03:05 policy-pap | at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1740) 15:03:05 policy-pap | at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) 15:03:05 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1189) 15:03:05 policy-pap | at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:658) 15:03:05 policy-pap | at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) 15:03:05 policy-pap | at java.base/java.lang.Thread.run(Thread.java:840) 15:03:05 policy-pap | [2025-06-13T15:02:05.708+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=045d2699-9d86-4a83-9873-ac3266bc9f6a, expireMs=1749826925708] 15:03:05 postgres | The files belonging to this database system will be owned by user "postgres". 15:03:05 postgres | This user must also own the server process. 15:03:05 postgres | 15:03:05 postgres | The database cluster will be initialized with locale "en_US.utf8". 15:03:05 postgres | The default database encoding has accordingly been set to "UTF8". 15:03:05 postgres | The default text search configuration will be set to "english". 
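The policy-pap stack trace above is the expected negative case: the second DELETE for policy abac arrives after the policy has already been undeployed, so PAP raises PfModelException "policy does not appear in any PDP group" and the controller logs "undeploy policy failed". A minimal sketch of the call that exercises this path, assuming plain HTTP on the port 6969 visible in the Tomcat thread names and placeholder credentials; the path follows the ONAP PAP REST API and is an assumption, not something printed in this log:

# hypothetical re-run of the undeploy request; user/password and API path are assumptions
curl -s -X DELETE -u "${PAP_USER}:${PAP_PASS}" \
  "http://localhost:6969/policy/pap/v1/pdps/policies/abac"
# first call: accepted, triggering the PDP_UPDATE/PDP_STATUS exchange logged above
# second call: an error body carrying "policy does not appear in any PDP group: abac null"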
15:03:05 postgres | 15:03:05 postgres | Data page checksums are disabled. 15:03:05 postgres | 15:03:05 postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok 15:03:05 postgres | creating subdirectories ... ok 15:03:05 postgres | selecting dynamic shared memory implementation ... posix 15:03:05 postgres | selecting default max_connections ... 100 15:03:05 postgres | selecting default shared_buffers ... 128MB 15:03:05 postgres | selecting default time zone ... Etc/UTC 15:03:05 postgres | creating configuration files ... ok 15:03:05 postgres | running bootstrap script ... ok 15:03:05 postgres | performing post-bootstrap initialization ... ok 15:03:05 postgres | initdb: warning: enabling "trust" authentication for local connections 15:03:05 postgres | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb. 15:03:05 postgres | syncing data to disk ... ok 15:03:05 postgres | 15:03:05 postgres | 15:03:05 postgres | Success. You can now start the database server using: 15:03:05 postgres | 15:03:05 postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start 15:03:05 postgres | 15:03:05 postgres | waiting for server to start....2025-06-13 14:56:49.164 UTC [47] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 15:03:05 postgres | 2025-06-13 14:56:49.180 UTC [47] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 15:03:05 postgres | 2025-06-13 14:56:49.203 UTC [50] LOG: database system was shut down at 2025-06-13 14:56:47 UTC 15:03:05 postgres | 2025-06-13 14:56:49.235 UTC [47] LOG: database system is ready to accept connections 15:03:05 postgres | done 15:03:05 postgres | server started 15:03:05 postgres | 15:03:05 postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db-pg.conf 15:03:05 postgres | 15:03:05 postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db-pg.sh 15:03:05 postgres | #!/bin/bash -xv 15:03:05 postgres | # Copyright (C) 2022, 2024 Nordix Foundation. All rights reserved 15:03:05 postgres | # 15:03:05 postgres | # Licensed under the Apache License, Version 2.0 (the "License"); 15:03:05 postgres | # you may not use this file except in compliance with the License. 15:03:05 postgres | # You may obtain a copy of the License at 15:03:05 postgres | # 15:03:05 postgres | # http://www.apache.org/licenses/LICENSE-2.0 15:03:05 postgres | # 15:03:05 postgres | # Unless required by applicable law or agreed to in writing, software 15:03:05 postgres | # distributed under the License is distributed on an "AS IS" BASIS, 15:03:05 postgres | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15:03:05 postgres | # See the License for the specific language governing permissions and 15:03:05 postgres | # limitations under the License. 
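# Context for the script body that follows: db-pg.sh creates the ${PGSQL_USER}
# role and then loops over six Policy Framework databases (migration, pooling,
# policyadmin, policyclamp, operationshistory, clampacm), setting owner and
# privileges on each. A post-init verification sketch, assuming the container
# is named "postgres" as in the teardown log and local "trust" auth applies:
#   docker exec postgres psql -U policy_user -d postgres -c '\l'
# Each of the six databases should list policy_user as its owner.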
15:03:05 postgres | 15:03:05 postgres | psql -U postgres -d postgres --command "CREATE USER ${PGSQL_USER} WITH PASSWORD '${PGSQL_PASSWORD}';" 15:03:05 postgres | + psql -U postgres -d postgres --command 'CREATE USER policy_user WITH PASSWORD '\''policy_user'\'';' 15:03:05 postgres | CREATE ROLE 15:03:05 postgres | 15:03:05 postgres | for db in migration pooling policyadmin policyclamp operationshistory clampacm 15:03:05 postgres | do 15:03:05 postgres | psql -U postgres -d postgres --command "CREATE DATABASE ${db};" 15:03:05 postgres | psql -U postgres -d postgres --command "ALTER DATABASE ${db} OWNER TO ${PGSQL_USER} ;" 15:03:05 postgres | psql -U postgres -d postgres --command "GRANT ALL PRIVILEGES ON DATABASE ${db} TO ${PGSQL_USER} ;" 15:03:05 postgres | done 15:03:05 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 15:03:05 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE migration;' 15:03:05 postgres | CREATE DATABASE 15:03:05 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE migration OWNER TO policy_user ;' 15:03:05 postgres | ALTER DATABASE 15:03:05 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE migration TO policy_user ;' 15:03:05 postgres | GRANT 15:03:05 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 15:03:05 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE pooling;' 15:03:05 postgres | CREATE DATABASE 15:03:05 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE pooling OWNER TO policy_user ;' 15:03:05 postgres | ALTER DATABASE 15:03:05 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE pooling TO policy_user ;' 15:03:05 postgres | GRANT 15:03:05 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 15:03:05 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyadmin;' 15:03:05 postgres | CREATE DATABASE 15:03:05 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyadmin OWNER TO policy_user ;' 15:03:05 postgres | ALTER DATABASE 15:03:05 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyadmin TO policy_user ;' 15:03:05 postgres | GRANT 15:03:05 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 15:03:05 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE policyclamp;' 15:03:05 postgres | CREATE DATABASE 15:03:05 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE policyclamp OWNER TO policy_user ;' 15:03:05 postgres | ALTER DATABASE 15:03:05 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE policyclamp TO policy_user ;' 15:03:05 postgres | GRANT 15:03:05 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 15:03:05 postgres | + psql -U postgres -d postgres --command 'CREATE DATABASE operationshistory;' 15:03:05 postgres | CREATE DATABASE 15:03:05 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE operationshistory OWNER TO policy_user ;' 15:03:05 postgres | ALTER DATABASE 15:03:05 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE operationshistory TO policy_user ;' 15:03:05 postgres | GRANT 15:03:05 postgres | + for db in migration pooling policyadmin policyclamp operationshistory clampacm 15:03:05 postgres | + psql -U postgres 
-d postgres --command 'CREATE DATABASE clampacm;' 15:03:05 postgres | CREATE DATABASE 15:03:05 postgres | + psql -U postgres -d postgres --command 'ALTER DATABASE clampacm OWNER TO policy_user ;' 15:03:05 postgres | ALTER DATABASE 15:03:05 postgres | + psql -U postgres -d postgres --command 'GRANT ALL PRIVILEGES ON DATABASE clampacm TO policy_user ;' 15:03:05 postgres | GRANT 15:03:05 postgres | 15:03:05 postgres | 2025-06-13 14:56:53.602 UTC [47] LOG: received fast shutdown request 15:03:05 postgres | waiting for server to shut down....2025-06-13 14:56:53.649 UTC [47] LOG: aborting any active transactions 15:03:05 postgres | 2025-06-13 14:56:53.654 UTC [47] LOG: background worker "logical replication launcher" (PID 53) exited with exit code 1 15:03:05 postgres | 2025-06-13 14:56:53.654 UTC [48] LOG: shutting down 15:03:05 postgres | 2025-06-13 14:56:53.677 UTC [48] LOG: checkpoint starting: shutdown immediate 15:03:05 postgres | ...2025-06-13 14:56:57.337 UTC [48] LOG: checkpoint complete: wrote 5511 buffers (33.6%); 0 WAL file(s) added, 0 removed, 1 recycled; write=2.758 s, sync=0.764 s, total=3.683 s; sync files=1788, longest=0.070 s, average=0.001 s; distance=25535 kB, estimate=25535 kB; lsn=0/2DDA218, redo lsn=0/2DDA218 15:03:05 postgres | 2025-06-13 14:56:57.356 UTC [47] LOG: database system is shut down 15:03:05 postgres | done 15:03:05 postgres | server stopped 15:03:05 postgres | 15:03:05 postgres | PostgreSQL init process complete; ready for start up. 15:03:05 postgres | 15:03:05 postgres | 2025-06-13 14:56:57.460 UTC [1] LOG: starting PostgreSQL 16.4 (Debian 16.4-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit 15:03:05 postgres | 2025-06-13 14:56:57.461 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 15:03:05 postgres | 2025-06-13 14:56:57.461 UTC [1] LOG: listening on IPv6 address "::", port 5432 15:03:05 postgres | 2025-06-13 14:56:57.468 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 15:03:05 postgres | 2025-06-13 14:56:57.480 UTC [100] LOG: database system was shut down at 2025-06-13 14:56:57 UTC 15:03:05 postgres | 2025-06-13 14:56:57.513 UTC [1] LOG: database system is ready to accept connections 15:03:05 postgres | 2025-06-13 15:01:57.574 UTC [98] LOG: checkpoint starting: time 15:03:05 postgres | 2025-06-13 15:03:02.925 UTC [98] LOG: checkpoint complete: wrote 655 buffers (4.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=65.318 s, sync=0.024 s, total=65.352 s; sync files=519, longest=0.002 s, average=0.001 s; distance=3563 kB, estimate=3563 kB; lsn=0/31574E8, redo lsn=0/3155000 15:03:05 prometheus | time=2025-06-13T14:56:44.159Z level=INFO source=main.go:674 msg="No time or size retention was set so using the default time retention" duration=15d 15:03:05 prometheus | time=2025-06-13T14:56:44.160Z level=INFO source=main.go:725 msg="Starting Prometheus Server" mode=server version="(version=3.4.1, branch=HEAD, revision=aea6503d9bbaad6c5faff3ecf6f1025213356c92)" 15:03:05 prometheus | time=2025-06-13T14:56:44.160Z level=INFO source=main.go:730 msg="operational information" build_context="(go=go1.24.3, platform=linux/amd64, user=root@16f976c24db1, date=20250531-10:44:38, tags=netgo,builtinassets,stringlabels)" host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" fd_limits="(soft=1048576, hard=1048576)" vm_limits="(soft=unlimited, hard=unlimited)" 15:03:05 prometheus | time=2025-06-13T14:56:44.160Z level=INFO source=main.go:806 
msg="Leaving GOMAXPROCS=8: CPU quota undefined" component=automaxprocs 15:03:05 prometheus | time=2025-06-13T14:56:44.163Z level=INFO source=web.go:656 msg="Start listening for connections" component=web address=0.0.0.0:9090 15:03:05 prometheus | time=2025-06-13T14:56:44.163Z level=INFO source=main.go:1266 msg="Starting TSDB ..." 15:03:05 prometheus | time=2025-06-13T14:56:44.166Z level=INFO source=tls_config.go:347 msg="Listening on" component=web address=[::]:9090 15:03:05 prometheus | time=2025-06-13T14:56:44.166Z level=INFO source=tls_config.go:350 msg="TLS is disabled." component=web http2=false address=[::]:9090 15:03:05 prometheus | time=2025-06-13T14:56:44.170Z level=INFO source=head.go:657 msg="Replaying on-disk memory mappable chunks if any" component=tsdb 15:03:05 prometheus | time=2025-06-13T14:56:44.170Z level=INFO source=head.go:744 msg="On-disk memory mappable chunks replay completed" component=tsdb duration=1.42µs 15:03:05 prometheus | time=2025-06-13T14:56:44.170Z level=INFO source=head.go:752 msg="Replaying WAL, this may take a while" component=tsdb 15:03:05 prometheus | time=2025-06-13T14:56:44.170Z level=INFO source=head.go:825 msg="WAL segment loaded" component=tsdb segment=0 maxSegment=0 duration=347.992µs 15:03:05 prometheus | time=2025-06-13T14:56:44.170Z level=INFO source=head.go:862 msg="WAL replay completed" component=tsdb checkpoint_replay_duration=19.471µs wal_replay_duration=366.433µs wbl_replay_duration=160ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=1.42µs total_replay_duration=426.045µs 15:03:05 prometheus | time=2025-06-13T14:56:44.174Z level=INFO source=main.go:1287 msg="filesystem information" fs_type=EXT4_SUPER_MAGIC 15:03:05 prometheus | time=2025-06-13T14:56:44.174Z level=INFO source=main.go:1290 msg="TSDB started" 15:03:05 prometheus | time=2025-06-13T14:56:44.174Z level=INFO source=main.go:1475 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 15:03:05 prometheus | time=2025-06-13T14:56:44.176Z level=INFO source=main.go:1514 msg="updated GOGC" old=100 new=75 15:03:05 prometheus | time=2025-06-13T14:56:44.176Z level=INFO source=main.go:1524 msg="Completed loading of configuration file" db_storage=1.55µs remote_storage=1.72µs web_handler=600ns query_engine=1.09µs scrape=246.049µs scrape_sd=372.413µs notify=181.836µs notify_sd=32.902µs rules=2.24µs tracing=5.61µs filename=/etc/prometheus/prometheus.yml totalDuration=1.664119ms 15:03:05 prometheus | time=2025-06-13T14:56:44.176Z level=INFO source=main.go:1251 msg="Server is ready to receive web requests." 15:03:05 prometheus | time=2025-06-13T14:56:44.176Z level=INFO source=manager.go:175 msg="Starting rule manager..." component="rule manager" 15:03:05 zookeeper | ===> User 15:03:05 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 15:03:05 zookeeper | ===> Configuring ... 15:03:05 zookeeper | ===> Running preflight checks ... 15:03:05 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 15:03:05 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 15:03:05 zookeeper | ===> Launching ... 15:03:05 zookeeper | ===> Launching zookeeper ... 
15:03:05 zookeeper | [2025-06-13 14:56:51,160] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 15:03:05 zookeeper | [2025-06-13 14:56:51,178] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 15:03:05 zookeeper | [2025-06-13 14:56:51,178] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 15:03:05 zookeeper | [2025-06-13 14:56:51,178] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 15:03:05 zookeeper | [2025-06-13 14:56:51,178] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 15:03:05 zookeeper | [2025-06-13 14:56:51,181] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 15:03:05 zookeeper | [2025-06-13 14:56:51,181] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 15:03:05 zookeeper | [2025-06-13 14:56:51,181] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 15:03:05 zookeeper | [2025-06-13 14:56:51,182] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 15:03:05 zookeeper | [2025-06-13 14:56:51,184] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) 15:03:05 zookeeper | [2025-06-13 14:56:51,184] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 15:03:05 zookeeper | [2025-06-13 14:56:51,185] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 15:03:05 zookeeper | [2025-06-13 14:56:51,185] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 15:03:05 zookeeper | [2025-06-13 14:56:51,185] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 15:03:05 zookeeper | [2025-06-13 14:56:51,185] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 15:03:05 zookeeper | [2025-06-13 14:56:51,185] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 15:03:05 zookeeper | [2025-06-13 14:56:51,197] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3bbc39f8 (org.apache.zookeeper.server.ServerMetrics) 15:03:05 zookeeper | [2025-06-13 14:56:51,200] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 15:03:05 zookeeper | [2025-06-13 14:56:51,200] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 15:03:05 zookeeper | [2025-06-13 14:56:51,202] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 15:03:05 zookeeper | [2025-06-13 14:56:51,210] INFO (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,210] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,210] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,211] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,211] INFO / 
/ / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,211] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,211] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,211] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,211] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,211] INFO (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,212] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,212] INFO Server environment:host.name=zookeeper (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,212] INFO Server environment:java.version=17.0.14 (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,212] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,212] INFO Server environment:java.home=/usr/lib/jvm/temurin-17-jre (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,212] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-streams-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-transaction-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/connect-transforms-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-clients-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/connect-mirror-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.16.2.jar:/usr/bin/../share/java/kafka/scala-library-2.13.15.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.118.Final.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.16.2.jar:/usr/bin/../share/java/kafka/connect-runtime-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-afterburner-2.16.2.jar:/usr/bin/../share/java/kafka
/kafka-streams-test-utils-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/protobuf-java-3.25.5.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.16.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/maven-artifact-3.9.6.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/trogdor-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-server-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.15.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/commons-lang3-3.12.0.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.118.Final.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.6-4.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-tools-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.118.Final.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.16.2.jar:/usr/bin/../share/java/kafka/jackson-databind-2.16.2.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/opentelemetry-proto-1.0.0-alpha.jar:/usr/bin/..
/share/java/kafka/jetty-security-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-json-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.16.2.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/kafka-raft-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/plexus-utils-3.5.1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.57.v20241219.jar:/usr/bin/../share/java/kafka/connect-api-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/commons-io-2.14.0.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.16.2.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/jackson-core-2.16.2.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.118.Final.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.118.Final.jar:/usr/bin/../share/java/kafka/kafka-storage-7.9.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.57.v20241219.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,212] INFO Server environment:java.library.path=/usr/local/lib64:/usr/local/lib::/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,212] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,212] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,212] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,212] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,213] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,213] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,213] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,213] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,213] INFO Server environment:os.memory.free=495MB (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,213] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,213] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,213] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 zookeeper | [2025-06-13 14:56:51,213] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 15:03:05 
zookeeper | [2025-06-13 14:56:51,213] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
15:03:05 zookeeper | [2025-06-13 14:56:51,213] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
15:03:05 zookeeper | [2025-06-13 14:56:51,213] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
15:03:05 zookeeper | [2025-06-13 14:56:51,213] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
15:03:05 zookeeper | [2025-06-13 14:56:51,213] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
15:03:05 zookeeper | [2025-06-13 14:56:51,214] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
15:03:05 zookeeper | [2025-06-13 14:56:51,215] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer)
15:03:05 zookeeper | [2025-06-13 14:56:51,215] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer)
15:03:05 zookeeper | [2025-06-13 14:56:51,216] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
15:03:05 zookeeper | [2025-06-13 14:56:51,216] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
15:03:05 zookeeper | [2025-06-13 14:56:51,217] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
15:03:05 zookeeper | [2025-06-13 14:56:51,217] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
15:03:05 zookeeper | [2025-06-13 14:56:51,217] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
15:03:05 zookeeper | [2025-06-13 14:56:51,217] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
15:03:05 zookeeper | [2025-06-13 14:56:51,217] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
15:03:05 zookeeper | [2025-06-13 14:56:51,217] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
15:03:05 zookeeper | [2025-06-13 14:56:51,219] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
15:03:05 zookeeper | [2025-06-13 14:56:51,219] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
15:03:05 zookeeper | [2025-06-13 14:56:51,220] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
15:03:05 zookeeper | [2025-06-13 14:56:51,220] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
15:03:05 zookeeper | [2025-06-13 14:56:51,220] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
15:03:05 zookeeper | [2025-06-13 14:56:51,255] INFO Logging initialized @435ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
15:03:05 zookeeper | [2025-06-13 14:56:51,323] WARN o.e.j.s.ServletContextHandler@6150c3ec{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
15:03:05 zookeeper | [2025-06-13 14:56:51,323] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
15:03:05 zookeeper | [2025-06-13 14:56:51,341] INFO jetty-9.4.57.v20241219; built: 2025-01-08T21:24:30.412Z; git: df524e6b29271c2e09ba9aea83c18dc9db464a31; jvm 17.0.14+7 (org.eclipse.jetty.server.Server)
15:03:05 zookeeper | [2025-06-13 14:56:51,385] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
15:03:05 zookeeper | [2025-06-13 14:56:51,385] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
15:03:05 zookeeper | [2025-06-13 14:56:51,387] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
15:03:05 zookeeper | [2025-06-13 14:56:51,391] WARN ServletContext@o.e.j.s.ServletContextHandler@6150c3ec{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
15:03:05 zookeeper | [2025-06-13 14:56:51,402] INFO Started o.e.j.s.ServletContextHandler@6150c3ec{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
15:03:05 zookeeper | [2025-06-13 14:56:51,415] INFO Started ServerConnector@222545dc{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
15:03:05 zookeeper | [2025-06-13 14:56:51,415] INFO Started @601ms (org.eclipse.jetty.server.Server)
15:03:05 zookeeper | [2025-06-13 14:56:51,415] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
15:03:05 zookeeper | [2025-06-13 14:56:51,420] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
15:03:05 zookeeper | [2025-06-13 14:56:51,421] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
15:03:05 zookeeper | [2025-06-13 14:56:51,423] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
15:03:05 zookeeper | [2025-06-13 14:56:51,425] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
15:03:05 zookeeper | [2025-06-13 14:56:51,437] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
15:03:05 zookeeper | [2025-06-13 14:56:51,437] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
15:03:05 zookeeper | [2025-06-13 14:56:51,437] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
15:03:05 zookeeper | [2025-06-13 14:56:51,437] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
15:03:05 zookeeper | [2025-06-13 14:56:51,443] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
15:03:05 zookeeper | [2025-06-13 14:56:51,443] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
15:03:05 zookeeper | [2025-06-13 14:56:51,447] INFO Snapshot loaded in 9 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
15:03:05 zookeeper | [2025-06-13 14:56:51,448] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
15:03:05 zookeeper | [2025-06-13 14:56:51,448] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer)
15:03:05 zookeeper | [2025-06-13 14:56:51,460] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
15:03:05 zookeeper | [2025-06-13 14:56:51,460] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
15:03:05 zookeeper | [2025-06-13 14:56:51,476] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
15:03:05 zookeeper | [2025-06-13 14:56:51,476] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
15:03:05 zookeeper | [2025-06-13 14:56:53,609] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
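The ZooKeeper startup log above advertises its client port (0.0.0.0:2181) and a Jetty AdminServer on port 8080 with command URL /commands. A minimal health-check sketch against such an instance, assuming both ports are reachable from the host; nothing below is part of the job itself:

    # AdminServer HTTP command endpoint, as advertised in the log ("command URL /commands")
    curl -s http://localhost:8080/commands/stats
    # Four-letter-word probe on the client port; 'srvr' is whitelisted by default,
    # other words may require zookeeper.4lw.commands.whitelist
    echo srvr | nc localhost 2181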
15:03:05 Tearing down containers...
15:03:06 Container policy-csit Stopping
15:03:06 Container policy-opa-pdp Stopping
15:03:06 Container grafana Stopping
15:03:06 Container policy-csit Stopped
15:03:06 Container policy-csit Removing
15:03:06 Container policy-csit Removed
15:03:06 Container grafana Stopped
15:03:06 Container grafana Removing
15:03:06 Container grafana Removed
15:03:06 Container prometheus Stopping
15:03:06 Container prometheus Stopped
15:03:06 Container prometheus Removing
15:03:06 Container prometheus Removed
15:03:16 Container policy-opa-pdp Stopped
15:03:16 Container policy-opa-pdp Removing
15:03:16 Container policy-opa-pdp Removed
15:03:16 Container policy-pap Stopping
15:03:26 Container policy-pap Stopped
15:03:26 Container policy-pap Removing
15:03:26 Container policy-pap Removed
15:03:26 Container policy-api Stopping
15:03:26 Container kafka Stopping
15:03:27 Container kafka Stopped
15:03:27 Container kafka Removing
15:03:27 Container kafka Removed
15:03:27 Container zookeeper Stopping
15:03:28 Container zookeeper Stopped
15:03:28 Container zookeeper Removing
15:03:28 Container zookeeper Removed
15:03:36 Container policy-api Stopped
15:03:36 Container policy-api Removing
15:03:36 Container policy-api Removed
15:03:36 Container policy-db-migrator Stopping
15:03:36 Container policy-db-migrator Stopped
15:03:36 Container policy-db-migrator Removing
15:03:37 Container policy-db-migrator Removed
15:03:37 Container postgres Stopping
15:03:37 Container postgres Stopped
15:03:37 Container postgres Removing
15:03:37 Container postgres Removed
15:03:37 Network compose_default Removing
15:03:37 Network compose_default Removed
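The stop/remove sequence above, ending with the compose_default network being deleted, is the output shape of a Docker Compose teardown. A hedged sketch of the command that produces it, assuming the CSIT stack lives in a directory named compose (implied by the default network name compose_default):

    cd compose                             # hypothetical project directory
    docker compose down --remove-orphans   # stop and remove containers, then the project network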
15:03:37 $ ssh-agent -k
15:03:37 unset SSH_AUTH_SOCK;
15:03:37 unset SSH_AGENT_PID;
15:03:37 echo Agent pid 2050 killed;
15:03:37 [ssh-agent] Stopped.
15:03:37 Robot results publisher started...
15:03:37 INFO: Checking test criticality is deprecated and will be dropped in a future release!
15:03:37 -Parsing output xml:
15:03:38 Done!
15:03:38 -Copying log files to build dir:
15:03:38 Done!
15:03:38 -Assigning results to build:
15:03:38 Done!
15:03:38 -Checking thresholds:
15:03:38 Done!
15:03:38 Done publishing Robot results.
15:03:38 [PostBuildScript] - [INFO] Executing post build scripts.
15:03:38 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins3150204558544543396.sh
15:03:38 ---> sysstat.sh
15:03:38 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins2781529697232678244.sh
15:03:38 ---> package-listing.sh
15:03:38 ++ tr '[:upper:]' '[:lower:]'
15:03:38 ++ facter osfamily
15:03:38 + OS_FAMILY=debian
15:03:38 + workspace=/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp
15:03:38 + START_PACKAGES=/tmp/packages_start.txt
15:03:38 + END_PACKAGES=/tmp/packages_end.txt
15:03:38 + DIFF_PACKAGES=/tmp/packages_diff.txt
15:03:38 + PACKAGES=/tmp/packages_start.txt
15:03:38 + '[' /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp ']'
15:03:38 + PACKAGES=/tmp/packages_end.txt
15:03:38 + case "${OS_FAMILY}" in
15:03:38 + dpkg -l
15:03:38 + grep '^ii'
15:03:38 + '[' -f /tmp/packages_start.txt ']'
15:03:38 + '[' -f /tmp/packages_end.txt ']'
15:03:38 + diff /tmp/packages_start.txt /tmp/packages_end.txt
15:03:38 + '[' /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp ']'
15:03:38 + mkdir -p /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/archives/
15:03:38 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/archives/
15:03:38 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins11769428784446255548.sh
15:03:38 ---> capture-instance-metadata.sh
15:03:39 Setup pyenv:
15:03:39 system
15:03:39 3.8.13
15:03:39 3.9.13
15:03:39 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
15:03:39 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-4ajy from file:/tmp/.os_lf_venv
15:03:41 lf-activate-venv(): INFO: Installing: lftools
15:03:49 lf-activate-venv(): INFO: Adding /tmp/venv-4ajy/bin to PATH
15:03:49 INFO: Running in OpenStack, capturing instance metadata
15:03:50 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins15616632399015554197.sh
15:03:50 provisioning config files...
15:03:50 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp@tmp/config11059553012543025769tmp
15:03:50 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
15:03:50 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
15:03:50 [EnvInject] - Injecting environment variables from a build step.
15:03:50 [EnvInject] - Injecting as environment variables the properties content
15:03:50 SERVER_ID=logs
15:03:50
15:03:50 [EnvInject] - Variables injected successfully.
15:03:50 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins6371977392643967992.sh
15:03:50 ---> create-netrc.sh
15:03:50 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins8506823128160491503.sh
15:03:50 ---> python-tools-install.sh
15:03:50 Setup pyenv:
15:03:50 system
15:03:50 3.8.13
15:03:50 3.9.13
15:03:50 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
15:03:50 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-4ajy from file:/tmp/.os_lf_venv
15:03:52 lf-activate-venv(): INFO: Installing: lftools
15:04:00 lf-activate-venv(): INFO: Adding /tmp/venv-4ajy/bin to PATH
15:04:00 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins18319135998476989600.sh
15:04:00 ---> sudo-logs.sh
15:04:00 Archiving 'sudo' log..
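The set -x trace from package-listing.sh above is fully recoverable as a script; a condensed, runnable sketch of the same logic, with $WORKSPACE standing in for the hard-coded workspace path:

    #!/bin/bash
    # Snapshot installed packages and archive the start/end diff.
    OS_FAMILY=$(facter osfamily | tr '[:upper:]' '[:lower:]')
    START_PACKAGES=/tmp/packages_start.txt
    END_PACKAGES=/tmp/packages_end.txt
    DIFF_PACKAGES=/tmp/packages_diff.txt

    # Inside a workspace this run records the end-of-build list, otherwise the start one.
    PACKAGES=$START_PACKAGES
    [ -n "$WORKSPACE" ] && PACKAGES=$END_PACKAGES

    case "$OS_FAMILY" in
      debian) dpkg -l | grep '^ii' > "$PACKAGES" ;;
    esac

    # Diff the two snapshots (diff exits non-zero on differences, which is fine here)
    # and keep all three files with the build artifacts.
    if [ -f "$START_PACKAGES" ] && [ -f "$END_PACKAGES" ]; then
      diff "$START_PACKAGES" "$END_PACKAGES" > "$DIFF_PACKAGES"
    fi
    mkdir -p "$WORKSPACE/archives/"
    cp -f "$DIFF_PACKAGES" "$END_PACKAGES" "$START_PACKAGES" "$WORKSPACE/archives/"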
15:04:00 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash /tmp/jenkins6500511610466874466.sh
15:04:00 ---> job-cost.sh
15:04:00 Setup pyenv:
15:04:01 system
15:04:01 3.8.13
15:04:01 3.9.13
15:04:01 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
15:04:01 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-4ajy from file:/tmp/.os_lf_venv
15:04:03 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
15:04:07 lf-activate-venv(): INFO: Adding /tmp/venv-4ajy/bin to PATH
15:04:07 INFO: No Stack...
15:04:08 INFO: Retrieving Pricing Info for: v3-standard-8
15:04:08 INFO: Archiving Costs
15:04:08 [policy-opa-pdp-master-project-csit-verify-opa-pdp] $ /bin/bash -l /tmp/jenkins11584958955091923149.sh
15:04:08 ---> logs-deploy.sh
15:04:08 Setup pyenv:
15:04:08 system
15:04:08 3.8.13
15:04:08 3.9.13
15:04:08 * 3.10.6 (set by /w/workspace/policy-opa-pdp-master-project-csit-verify-opa-pdp/.python-version)
15:04:08 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-4ajy from file:/tmp/.os_lf_venv
15:04:10 lf-activate-venv(): INFO: Installing: lftools
15:04:18 lf-activate-venv(): INFO: Adding /tmp/venv-4ajy/bin to PATH
15:04:18 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-opa-pdp-master-project-csit-verify-opa-pdp/159
15:04:18 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
15:04:19 Archives upload complete.
15:04:19 INFO: archiving logs to Nexus
15:04:20 ---> uname -a:
15:04:20 Linux prd-ubuntu1804-docker-8c-8g-20900 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
15:04:20
15:04:20
15:04:20 ---> lscpu:
15:04:20 Architecture: x86_64
15:04:20 CPU op-mode(s): 32-bit, 64-bit
15:04:20 Byte Order: Little Endian
15:04:20 CPU(s): 8
15:04:20 On-line CPU(s) list: 0-7
15:04:20 Thread(s) per core: 1
15:04:20 Core(s) per socket: 1
15:04:20 Socket(s): 8
15:04:20 NUMA node(s): 1
15:04:20 Vendor ID: AuthenticAMD
15:04:20 CPU family: 23
15:04:20 Model: 49
15:04:20 Model name: AMD EPYC-Rome Processor
15:04:20 Stepping: 0
15:04:20 CPU MHz: 2800.000
15:04:20 BogoMIPS: 5600.00
15:04:20 Virtualization: AMD-V
15:04:20 Hypervisor vendor: KVM
15:04:20 Virtualization type: full
15:04:20 L1d cache: 32K
15:04:20 L1i cache: 32K
15:04:20 L2 cache: 512K
15:04:20 L3 cache: 16384K
15:04:20 NUMA node0 CPU(s): 0-7
15:04:20 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
15:04:20
15:04:20
15:04:20 ---> nproc:
15:04:20 8
15:04:20
15:04:20
15:04:20 ---> df -h:
15:04:20 Filesystem Size Used Avail Use% Mounted on
15:04:20 udev 16G 0 16G 0% /dev
15:04:20 tmpfs 3.2G 708K 3.2G 1% /run
15:04:20 /dev/vda1 155G 15G 141G 10% /
15:04:20 tmpfs 16G 0 16G 0% /dev/shm
15:04:20 tmpfs 5.0M 0 5.0M 0% /run/lock
15:04:20 tmpfs 16G 0 16G 0% /sys/fs/cgroup
15:04:20 /dev/vda15 105M 4.4M 100M 5% /boot/efi
15:04:20 tmpfs 3.2G 0 3.2G 0% /run/user/1001
15:04:20
15:04:20
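The host-facts dump that starts above (uname -a, lscpu, nproc, df -h) and continues below (free -m, ip addr, sar) is a plain sequence of diagnostics printed under a "---> command:" banner. A minimal sketch of such a collection loop, with the command list inferred from the banners; the job's real script may differ:

    for cmd in "uname -a" "lscpu" "nproc" "df -h" "free -m" "ip addr"; do
      echo "---> $cmd:"
      $cmd    # deliberately unquoted so "df -h" word-splits into command plus flag
      echo
    done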
15:04:20 ---> free -m:
15:04:20               total        used        free      shared  buff/cache   available
15:04:20 Mem:          32167         891       24037           0        7238       30820
15:04:20 Swap:          1023           0        1023
15:04:20
15:04:20
15:04:20 ---> ip addr:
15:04:20 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
15:04:20     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
15:04:20     inet 127.0.0.1/8 scope host lo
15:04:20        valid_lft forever preferred_lft forever
15:04:20     inet6 ::1/128 scope host
15:04:20        valid_lft forever preferred_lft forever
15:04:20 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
15:04:20     link/ether fa:16:3e:2c:35:a6 brd ff:ff:ff:ff:ff:ff
15:04:20     inet 10.30.106.73/23 brd 10.30.107.255 scope global dynamic ens3
15:04:20        valid_lft 85759sec preferred_lft 85759sec
15:04:20     inet6 fe80::f816:3eff:fe2c:35a6/64 scope link
15:04:20        valid_lft forever preferred_lft forever
15:04:20 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
15:04:20     link/ether 02:42:ad:87:72:38 brd ff:ff:ff:ff:ff:ff
15:04:20     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
15:04:20        valid_lft forever preferred_lft forever
15:04:20     inet6 fe80::42:adff:fe87:7238/64 scope link
15:04:20        valid_lft forever preferred_lft forever
15:04:20
15:04:20
15:04:20 ---> sar -b -r -n DEV:
15:04:20 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-20900) 06/13/25 _x86_64_ (8 CPU)
15:04:20
15:04:20 14:53:42 LINUX RESTART (8 CPU)
15:04:20
15:04:20 14:54:01 tps rtps wtps bread/s bwrtn/s
15:04:20 14:55:01 291.30 66.07 225.23 4298.75 99971.07
15:04:20 14:56:01 242.98 19.86 223.12 2325.54 98808.97
15:04:20 14:57:01 432.92 3.03 429.89 431.86 158014.00
15:04:20 14:58:01 132.96 0.05 132.91 2.53 6312.68
15:04:20 14:59:01 7.07 0.02 7.05 0.13 1379.10
15:04:20 15:00:01 43.68 0.17 43.51 21.46 6905.65
15:04:20 15:01:01 179.46 0.20 179.26 10.13 27073.02
15:04:20 15:02:01 9.07 0.00 9.07 0.00 199.57
15:04:20 15:03:01 11.73 0.00 11.73 0.00 282.04
15:04:20 15:04:01 60.32 1.37 58.96 108.12 1378.70
15:04:20 Average: 141.18 9.08 132.10 720.23 40048.80
15:04:20
15:04:20 14:54:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
15:04:20 14:55:01 30140184 31615588 2799036 8.50 57404 1736012 1507764 4.44 941636 1580644 95192
15:04:20 14:56:01 25462472 31590444 7476748 22.70 127868 6138308 2035600 5.99 1042924 5915616 3174276
15:04:20 14:57:01 24570572 31067432 8368648 25.41 153152 6434828 6042672 17.78 1710972 6030104 5144
15:04:20 14:58:01 23215324 29872472 9723896 29.52 163576 6587360 7512984 22.10 2995016 6082644 38416
15:04:20 14:59:01 23188140 29845580 9751080 29.60 163784 6587856 7722980 22.72 3022084 6081512 152
15:04:20 15:00:01 22887516 29747744 10051704 30.52 176380 6756260 8272536 24.34 3140244 6235480 468
15:04:20 15:01:01 22506920 29705108 10432300 31.67 204524 7034332 8090784 23.80 3274884 6446292 2108
15:04:20 15:02:01 22471676 29671376 10467544 31.78 204684 7035192 8078568 23.77 3316412 6440712 728
15:04:20 15:03:01 22489912 29689876 10449308 31.72 204772 7035276 8111160 23.86 3296348 6440328 240
15:04:20 15:04:01 24650840 31594780 8288380 25.16 206320 6773052 1554792 4.57 1454940 6196576 29020
15:04:20 Average: 24158356 30440040 8780864 26.66 166246 6211848 5892984 17.34 2419546 5744991 334574
15:04:20
15:04:20 14:54:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
15:04:20 14:55:01 ens3 514.71 325.46 1672.01 81.85 0.00 0.00 0.00 0.00
15:04:20 14:55:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:04:20 14:55:01 lo 1.47 1.47 0.16 0.16 0.00 0.00 0.00 0.00
15:04:20 14:56:01 ens3 1169.27 702.78 33074.96 59.32 0.00 0.00 0.00 0.00
15:04:20 14:56:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:04:20 14:56:01 lo 13.76 13.76 1.27 1.27 0.00 0.00 0.00 0.00
15:04:20 14:57:01 ens3 9.56 8.86 2.27 2.83 0.00 0.00 0.00 0.00
15:04:20 14:57:01 vethb61bdea 1.63 1.92 0.17 0.19 0.00 0.00 0.00 0.00
15:04:20 14:57:01 vethe619803 0.22 0.32 0.01 0.02 0.00 0.00 0.00 0.00
15:04:20 14:57:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:04:20 14:58:01 ens3 50.99 38.33 311.64 3.11 0.00 0.00 0.00 0.00
15:04:20 14:58:01 vethb61bdea 7.43 6.70 1.05 0.97 0.00 0.00 0.00 0.00
15:04:20 14:58:01 vethe619803 0.93 1.07 0.06 0.06 0.00 0.00 0.00 0.00
15:04:20 14:58:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:04:20 14:59:01 ens3 0.93 0.75 0.12 0.24 0.00 0.00 0.00 0.00
15:04:20 14:59:01 vethb61bdea 11.48 7.82 0.95 1.09 0.00 0.00 0.00 0.00
15:04:20 14:59:01 vethe619803 1.60 2.22 0.21 0.24 0.00 0.00 0.00 0.00
15:04:20 14:59:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:04:20 15:00:01 ens3 52.04 30.01 848.73 2.79 0.00 0.00 0.00 0.00
15:04:20 15:00:01 vethb61bdea 15.61 10.85 1.59 1.60 0.00 0.00 0.00 0.00
15:04:20 15:00:01 vethe619803 3.32 5.12 0.53 0.58 0.00 0.00 0.00 0.00
15:04:20 15:00:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:04:20 15:01:01 veth1a544bc 0.45 0.52 0.08 0.13 0.00 0.00 0.00 0.00
15:04:20 15:01:01 ens3 183.88 112.62 1348.67 9.47 0.00 0.00 0.00 0.00
15:04:20 15:01:01 vethb61bdea 14.42 9.63 1.36 1.42 0.00 0.00 0.00 0.00
15:04:20 15:01:01 vethe619803 3.16 4.96 0.51 0.54 0.00 0.00 0.00 0.00
15:04:20 15:02:01 veth1a544bc 5.38 4.37 0.82 0.90 0.00 0.00 0.00 0.00
15:04:20 15:02:01 ens3 0.88 0.73 0.18 0.30 0.00 0.00 0.00 0.00
15:04:20 15:02:01 vethb61bdea 17.26 13.10 2.14 1.95 0.00 0.00 0.00 0.00
15:04:20 15:02:01 vethe619803 5.07 7.18 0.80 1.02 0.00 0.00 0.00 0.00
15:04:20 15:03:01 veth1a544bc 1.70 1.68 0.11 0.12 0.00 0.00 0.00 0.00
15:04:20 15:03:01 ens3 1.57 1.37 0.55 1.82 0.00 0.00 0.00 0.00
15:04:20 15:03:01 vethb61bdea 13.73 9.08 1.15 1.29 0.00 0.00 0.00 0.00
15:04:20 15:03:01 vethe619803 4.33 5.98 0.54 0.54 0.00 0.00 0.00 0.00
15:04:20 15:04:01 ens3 51.82 43.69 70.87 34.20 0.00 0.00 0.00 0.00
15:04:20 15:04:01 docker0 111.20 181.62 7.39 1348.93 0.00 0.00 0.00 0.00
15:04:20 15:04:01 lo 31.39 31.39 2.76 2.76 0.00 0.00 0.00 0.00
15:04:20 Average: ens3 203.82 126.61 3740.58 19.60 0.00 0.00 0.00 0.00
15:04:20 Average: docker0 11.12 18.16 0.74 134.85 0.00 0.00 0.00 0.00
15:04:20 Average: lo 2.81 2.81 0.25 0.25 0.00 0.00 0.00 0.00
15:04:20
15:04:20
15:04:20 ---> sar -P ALL:
15:04:20 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-20900) 06/13/25 _x86_64_ (8 CPU)
15:04:20
15:04:20 14:53:42 LINUX RESTART (8 CPU)
15:04:20
15:04:20 14:54:01 CPU %user %nice %system %iowait %steal %idle
15:04:20 14:55:01 all 8.26 0.00 1.10 6.97 0.04 83.62
15:04:20 14:55:01 0 2.58 0.00 0.89 17.64 0.03 78.86
15:04:20 14:55:01 1 6.54 0.00 1.78 0.67 0.03 90.98
15:04:20 14:55:01 2 4.04 0.00 0.89 0.28 0.03 94.76
15:04:20 14:55:01 3 22.46 0.00 1.24 4.35 0.05 71.90
15:04:20 14:55:01 4 7.26 0.00 1.02 0.42 0.05 91.26
15:04:20 14:55:01 5 3.35 0.00 0.62 26.76 0.05 69.23
15:04:20 14:55:01 6 9.79 0.00 0.92 1.46 0.05 87.78
15:04:20 14:55:01 7 10.10 0.00 1.47 4.21 0.07 84.15
15:04:20 14:56:01 all 15.75 0.00 5.02 8.54 0.07 70.62
15:04:20 14:56:01 0 17.49 0.00 5.22 3.93 0.07 73.29
15:04:20 14:56:01 1 10.93 0.00 5.15 2.25 0.05 81.62
15:04:20 14:56:01 2 11.64 0.00 4.31 1.94 0.05 82.07
15:04:20 14:56:01 3 15.20 0.00 4.95 11.57 0.08 68.20
15:04:20 14:56:01 4 34.20 0.00 5.71 5.17 0.08 54.84
15:04:20 14:56:01 5 11.45 0.00 4.48 21.19 0.07 62.80
15:04:20 14:56:01 6 14.21 0.00 5.52 20.59 0.07 59.61
15:04:20 14:56:01 7 10.94 0.00 4.85 1.70 0.05 82.46
15:04:20 14:57:01 all 7.54 0.00 2.83 15.46 0.08 74.09
15:04:20 14:57:01 0 5.28 0.00 2.67 3.01 0.17 88.88
15:04:20 14:57:01 1 7.10 0.00 2.83 9.15 0.07 80.85
15:04:20 14:57:01 2 8.79 0.00 4.71 56.51 0.10 29.89
15:04:20 14:57:01 3 10.19 0.00 2.62 11.88 0.07 75.25
15:04:20 14:57:01 4 10.19 0.00 2.83 3.03 0.07 83.87
15:04:20 14:57:01 5 6.53 0.00 2.74 26.16 0.07 64.50
15:04:20 14:57:01 6 6.24 0.00 2.35 9.23 0.05 82.12
15:04:20 14:57:01 7 6.03 0.00 1.90 5.12 0.07 86.88
15:04:20 14:58:01 all 24.14 0.00 3.11 7.79 0.09 64.88
15:04:20 14:58:01 0 29.83 0.00 3.68 13.19 0.10 53.19
15:04:20 14:58:01 1 19.09 0.00 2.88 1.83 0.08 76.12
15:04:20 14:58:01 2 26.27 0.00 3.24 2.47 0.08 67.94
15:04:20 14:58:01 3 21.79 0.00 2.70 14.46 0.08 60.96
15:04:20 14:58:01 4 25.33 0.00 3.18 7.97 0.10 63.41
15:04:20 14:58:01 5 26.57 0.00 3.48 4.90 0.08 64.96
15:04:20 14:58:01 6 22.37 0.00 2.74 14.53 0.10 60.26
15:04:20 14:58:01 7 21.87 0.00 3.00 2.95 0.08 72.10
15:04:20 14:59:01 all 1.19 0.00 0.18 0.09 0.03 98.50
15:04:20 14:59:01 0 0.80 0.00 0.15 0.37 0.02 98.67
15:04:20 14:59:01 1 1.63 0.00 0.30 0.00 0.03 98.03
15:04:20 14:59:01 2 1.15 0.00 0.15 0.03 0.02 98.65
15:04:20 14:59:01 3 1.39 0.00 0.13 0.00 0.02 98.46
15:04:20 14:59:01 4 0.97 0.00 0.20 0.23 0.03 98.56
15:04:20 14:59:01 5 0.98 0.00 0.10 0.00 0.02 98.90
15:04:20 14:59:01 6 1.62 0.00 0.22 0.00 0.03 98.13
15:04:20 14:59:01 7 0.90 0.00 0.17 0.10 0.03 98.80
15:04:20 15:00:01 all 2.99 0.00 0.72 0.48 0.04 95.77
15:04:20 15:00:01 0 2.99 0.00 0.47 0.17 0.05 96.32
15:04:20 15:00:01 1 3.21 0.00 0.99 0.22 0.05 95.54
15:04:20 15:00:01 2 3.02 0.00 0.63 0.27 0.03 96.04
15:04:20 15:00:01 3 2.84 0.00 0.67 0.08 0.05 96.36
15:04:20 15:00:01 4 2.33 0.00 1.14 1.09 0.05 95.40
15:04:20 15:00:01 5 3.54 0.00 0.62 0.02 0.05 95.78
15:04:20 15:00:01 6 2.52 0.00 0.59 1.71 0.05 95.14
15:04:20 15:00:01 7 3.48 0.00 0.70 0.28 0.03 95.50
15:04:20 15:01:01 all 8.25 0.00 2.07 1.76 0.07 87.86
15:04:20 15:01:01 0 5.82 0.00 2.04 0.97 0.05 91.12
15:04:20 15:01:01 1 5.25 0.00 1.24 0.02 0.05 93.44
15:04:20 15:01:01 2 16.03 0.00 3.03 4.53 0.08 76.33
15:04:20 15:01:01 3 12.94 0.00 2.25 0.39 0.05 84.38
15:04:20 15:01:01 4 7.59 0.00 1.84 6.59 0.18 83.80
15:04:20 15:01:01 5 5.49 0.00 1.71 0.45 0.07 92.28
15:04:20 15:01:01 6 5.18 0.00 2.18 0.17 0.03 92.44
15:04:20 15:01:01 7 7.72 0.00 2.24 0.95 0.03 89.05
15:04:20 15:02:01 all 3.60 0.00 0.65 0.08 0.31 95.37
15:04:20 15:02:01 0 3.75 0.00 0.63 0.02 0.03 95.56
15:04:20 15:02:01 1 2.99 0.00 0.92 0.00 0.03 96.05
15:04:20 15:02:01 2 5.19 0.00 0.64 0.30 0.05 93.82
15:04:20 15:02:01 3 3.61 0.00 0.33 0.00 0.05 96.01
15:04:20 15:02:01 4 3.48 0.00 1.00 0.03 0.81 94.67
15:04:20 15:02:01 5 2.58 0.00 0.87 0.30 0.63 95.62
15:04:20 15:02:01 6 4.11 0.00 0.52 0.00 0.05 95.32
15:04:20 15:02:01 7 3.04 0.00 0.28 0.00 0.85 95.82
15:04:20 15:03:01 all 0.86 0.00 0.18 0.04 0.03 98.89
15:04:20 15:03:01 0 1.52 0.00 0.25 0.08 0.05 98.10
15:04:20 15:03:01 1 0.42 0.00 0.13 0.00 0.02 99.43
15:04:20 15:03:01 2 0.47 0.00 0.13 0.22 0.03 99.15
15:04:20 15:03:01 3 0.23 0.00 0.10 0.00 0.00 99.67
15:04:20 15:03:01 4 0.72 0.00 0.18 0.00 0.05 99.05
15:04:20 15:03:01 5 1.75 0.00 0.25 0.00 0.03 97.96
15:04:20 15:03:01 6 0.82 0.00 0.27 0.02 0.03 98.86
15:04:20 15:03:01 7 0.93 0.00 0.12 0.00 0.00 98.95
15:04:20 15:04:01 all 5.65 0.00 0.84 0.19 0.03 93.29
15:04:20 15:04:01 0 5.17 0.00 1.02 0.07 0.03 93.71
15:04:20 15:04:01 1 4.44 0.00 0.80 0.03 0.03 94.69
15:04:20 15:04:01 2 27.21 0.00 1.45 0.22 0.07 71.05
15:04:20 15:04:01 3 1.42 0.00 0.73 0.08 0.03 97.73
15:04:20 15:04:01 4 1.87 0.00 0.72 0.05 0.02 97.34
15:04:20 15:04:01 5 1.82 0.00 0.57 0.03 0.05 97.53
15:04:20 15:04:01 6 1.80 0.00 0.89 0.08 0.02 97.21
15:04:20 15:04:01 7 1.44 0.00 0.57 0.92 0.03 97.04
15:04:20 Average: all 7.81 0.00 1.67 4.13 0.08 86.31
15:04:20 Average: 0 7.52 0.00 1.70 3.94 0.06 86.78
15:04:20 Average: 1 6.16 0.00 1.70 1.41 0.05 90.68
15:04:20 Average: 2 10.38 0.00 1.91 6.62 0.06 81.04
15:04:20 Average: 3 9.19 0.00 1.57 4.27 0.05 84.92
15:04:20 Average: 4 9.36 0.00 1.78 2.45 0.15 86.27
15:04:20 Average: 5 6.39 0.00 1.54 7.96 0.11 84.00
15:04:20 Average: 6 6.87 0.00 1.62 4.77 0.05 86.70
15:04:20 Average: 7 6.64 0.00 1.53 1.62 0.13 90.09
15:04:20
15:04:20
15:04:20
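The two reports above are standard sysstat output; assuming the collector has been writing the daily data file, each can be regenerated with a single sar invocation (flags taken from the section banners; the interval form is an illustrative alternative):

    sar -b -r -n DEV    # I/O rates, memory usage and per-interface network stats
    sar -P ALL          # per-CPU utilisation
    sar -P ALL 60 10    # or sample live: ten 60-second samples, matching the one-minute rows above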